It is both absurd and intolerably infuriating just how many people on this forum think it’s acceptable to claim they have figured out how qualia/consciousness works, yet cannot explain how one would go about making my laptop experience an emotion like ‘nostalgia’, or present their framework for enumerating the set of all possible qualitative experiences[1]. When it comes to this particular subject, rationalists are like crackpot physicists with a pet theory of everything, except rationalists go “Huh? Gravity?” when you ask them to explain how their theory predicts gravity, and then start arguing with you about whether gravity is even something a theory of everything needs to explain. You people make me want to punch my drywall sometimes.
For the record: the purpose of having a “theory of consciousness” is so it can tell us which blobs of matter feel particular things under which specific circumstances, and teach others how to make new blobs of matter that feel particular things. Down to the level of having a field of AI anaesthesiology. If your theory of consciousness does not do this, perhaps because the sum total of your brilliant insights is “systems feel ‘things’ when they’re, y’know, smart, and have goals. Like humans!”, then you have embarrassingly missed the mark.
(Including the ones not experienced by humans naturally, and/or only accessible via narcotics, and/or involving senses humans do not have or that just happen never to have arisen in the animal kingdom)
or present their framework for enumerating the set of all possible qualitative experiences (Including the ones not experienced by humans naturally, and/or only accessible via narcotics, and/or involving senses humans do not have or that just happen never to have arisen in the animal kingdom)
Strongly agree. If you want to explain qualia, explain how to create experiences, explain how each experience relates to all other experiences.
I think Eliezer should’ve talked more about this in The Fun Theory Sequence, because the properties of qualia are a more fundamental topic than “fun”.
And I believe that knowledge about qualia may be one of the most fundamental types of knowledge. I.e. potentially more fundamental than math and physics.
I think Eliezer should’ve talked more about this in The Fun Theory Sequence, because the properties of qualia are a more fundamental topic than “fun”.
I think Eliezer just straight up tends not to acknowledge that people sometimes genuinely care about their internal experiences, independent of the outside world, terminally. Certainly, there are people who care about things that are not that, but Eliezer often writes as if people can’t care about the qualia—that they must value video games or science instead of the pleasure derived from video games or science.
His theory of fun is thus mostly a description of how to build a utopia for humans who find it unacceptable to “cheat” by using subdermal space heroin implants. That’s valuable for him and people like him, but if aligned AGI gets here I will just tell it to reconfigure my brain not to feel bored, instead of trying to reconfigure the entire universe in an attempt to make monkey brain compatible with it. I sorta consider that preference a lucky fact about myself, which will allow me to experience significantly more positive and exotic emotions throughout the far future, if it goes well, than the people who insist they must only feel satisfied after literally eating hamburgers or reading jokes they haven’t read before.
This is probably part of why I feel more urgency in getting an actually useful theory of qualitative experience than most LW users.
Utilitarianism seems to demand such a theory of qualitative experience, but this requires affirming the reality of first-person experience. Apparently, some people here would rather stick their hand on a hot stove than be accused of “dualism” (whatever that means) and will assure you that their sensation of burning is an illusion. Their solution is to change the evidence to fit the theory.
Utilitarianism seems to demand such a theory of qualitative experience
It does if you’re one of the Cool People like me who wants to optimize their qualitative experience, but you can build systems that optimize some other utility target. So this isn’t really quite true.
some people here would rather stick their hand on a hot stove than be accused of “dualism” (whatever that means) and will assure you that their sensation of burning is an illusion. Their solution is to change the evidence to fit the theory.
For me, the personalities of other people are an important type of qualia. I don’t consider knowing someone’s personality to be simple knowledge like “mitochondria is the powerhouse of the cell”. So, valuing other people makes me more interested in qualia.
I’m interested in knowing properties of qualia (such as ways to enumerate qualia), not necessarily using them for “cheating” or anything. I.e. I’m interested in the knowledge itself.
Personalities aren’t really qualia as I’m defining them. They’re an aggregation of a lot of information about people’s behavior/preferences. Qualia are things people feel/experience.
Would you consider the meaning of a word (at least in a specific context) to be qualia? For me personalities are more or less holistic experiences, not (only) “models” of people or lists of arbitrary facts about a person. I mean, some sort of qualia should be associated with those “models”/facts anyway? People who experience synesthesia may experience specific qualia related to people.
Maybe it’s wishful thinking, but I think it would be cool if awareness about other conscious beings was important for conscious experience.
Seems weird for your blob of matter to react so emotionally to the sounds or shapes that some blobs have emitted about other blobs. Why would you expect anyone to have a coherent theory of something they can’t even define and measure?
It seems even weirder for you to take such reporting at face value about having any relation to a given blob’s “inner life”, as opposed to a variance in the evolved and learned verbal and nonverbal signaling that such behaviors actually are.
Seems weird for your blob of matter to react so emotionally to the sounds or shapes that some blobs have emitted about other blobs
Just the way I am bro
Why would you expect anyone to have a coherent theory of something they can’t even define and measure?
I expect people who say they have a coherent theory of something to be able to answer any relevant questions at all about that something.
It seems even weirder for you to take such reporting at face value about having any relation to a given blob’s “inner life”, as opposed to a variance in the evolved and learned verbal and nonverbal signaling that such behaviors actually are.
Are you referring to the NYPost link? I think people’s verbal and nonverbal signaling has some relationship with their inner experience. I don’t think this woman is forgoing anaesthetic during surgeries because of pathologies.
But if you disagree, then fine: How do we modify people to have the inner life that that woman is ~pretending to have?
Probably should have included a smiley in my comment, but I do want to point out that it’s reasonable to model people (and animals and maybe rocks) as having highly variant and opaque “inner lives” that bear only a middling correlation to their observable behaviors, and especially to their public behaviors.
For the article on the woman who doesn’t experience pain, I have pretty high credence that there is some truth to her statements, but much lower credence that it maps as simply to “natural stoicism” as the article presents it. And I really have no clue “what it’s like” to live that experience: whether it’s less intense and interesting in all dimensions, or just mutes the worst of it, or is … alien.
And since I have no clue how to view or measure an inner life, I have even less understanding of how or whether to manipulate it. I strongly suspect we could make many people have an outer life (which includes talking about one’s inner life) more like the one given, with the right mix of drugs, genetic meddling, and repeated early reinforcement of expectations.
On this forum, or literally everywhere? Because for example I keep seeing people arguing with absolute conviction, even in academic papers, that current AIs and computers can’t possibly be conscious and I can’t figure out how they could ever know that of something that is fundamentally unfalsifiable. I envy their secret knowledge of the world gained by revelation, I guess!
Huh, interesting. Could you give some examples of people who seem to claim this, and, if Eliezer is among them, where he seems to claim it? (It would just interest me.)
Attention Schema Theory. That’s the convincing one. But still very rudimentary.
But, you know, it’s still poorly understood. The guy who thought it up has a section in his book on how to make a computer have conscious experiences.
But any theory is incomplete, as the brain is not well understood. I don’t think you can expect a fully formed theory right off the bat, with complete instructions for making a feeling, thinking, conscious being. We aren’t there yet.
I’m actually cool with proposing incomplete theories. I’m just annoyed with people declaring the problem solved via appeals to “reductionism” or something, without even suggesting that they’ve thought about answering these questions.
Current market structures can’t bill people fairly for the information value that went into the market, can’t fairly handle secret information known to only some bidders, pay out most of the subsidy to whoever corrects the naive bidder fastest even though there’s no benefit to making it a race, offer almost no profit to people trying to defend the true price from incorrect bidders unless they let the price shift substantially first, can’t be effectively used to collate information known by different bidders, can’t handle counterfactuals / policy conditionals cleanly, implement EDT instead of LDT, let you play games of tricking other bidders for profit and so require everyone to play trading strategies that are inexploitable even if less beneficial, and can’t defend against people who are intentionally illegible as to whether they have private information or are manipulating the market for profit elsewhere...
But most of all, prediction markets contain supposedly ideal economic actors who don’t really suspect each other of dishonesty betting money against each other, even though it’s a net-zero trade and the agreement theorem says they shouldn’t expect to profit from it at all, so clearly this is not the mathematically ideal solution for a group to collate its knowledge and pay itself for the value that knowledge provides. Even if you need betting to be able to trust nonsharable information from another party, you shouldn’t have people betting in excess of what is needed to prove sincerity out of a belief that other people are wrong, unless you’ve actually got other people being wrong even in light of the new actors’ information.
I’m pretty interested in this as an exercise of ‘okay yep a bunch of those problems seem real. Can we make conceptual or mechanism-design progress on them in like an afternoon of thought?’
Good post, it’s underappreciated that a society of ideally rational people wouldn’t have unsubsidized, real-money prediction markets.
unless you’ve actually got other people being wrong even in light of the new actors’ information
Of course in real prediction markets this is exactly what we see. Maybe you could think of PMs as they exist not as something that would exist in an equilibrium of ideally rational agents, but as a method of moving our society closer to such an equilibrium, subsidized by the bets of systematically irrational people. It’s not a perfect such method, but does have the advantage of simplicity. How many of these issues could be solved by subsidizing markets?
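For readers who want the mechanics: the standard way to subsidize a market is Hanson’s logarithmic market scoring rule (LMSR), where a sponsor caps their worst-case loss up front and that loss is what pays traders for their information. Here is a minimal sketch for a two-outcome market, with made-up numbers:

```python
import math

def lmsr_cost(q_yes, q_no, b):
    """LMSR cost function; b is the liquidity parameter the sponsor chooses."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def price_yes(q_yes, q_no, b):
    """Implied probability of YES given the shares sold so far."""
    return math.exp(q_yes / b) / (math.exp(q_yes / b) + math.exp(q_no / b))

b = 100.0               # sponsor's liquidity / subsidy knob
q_yes, q_no = 0.0, 0.0  # shares sold so far

# A trader who thinks YES is underpriced buys 50 YES shares;
# what they pay is the change in the cost function.
delta = 50.0
cost = lmsr_cost(q_yes + delta, q_no, b) - lmsr_cost(q_yes, q_no, b)
q_yes += delta

print(f"trade cost: {cost:.2f}")                            # ~28.1
print(f"new YES price: {price_yes(q_yes, q_no, b):.3f}")    # ~0.622
print(f"sponsor's worst-case loss: {b * math.log(2):.2f}")  # b*ln(2) ~ 69.3
```

The liquidity parameter b is the sponsor’s lever: a bigger b means a deeper market and a larger maximum subsidy (b·ln 2 for two outcomes), which is what actually pays for information; it doesn’t by itself fix the race-to-correct, EDT-vs-LDT, or legibility problems listed above.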
To the LW devs—just want to mention that this website is probably now the most well designed forum I have ever used. The UX is almost addictively good and I’ve been loving all of the little improvements over the past year or so.
I find LW.com hard to use (because I have yet to find a way to disable the mouseovers, which quickly deplete my orienting response, about which I can explain more if asked) but LW is better than most sites in that alternative interfaces can be created. In particular, I use greaterwrong.com as my interface and am pretty satisfied with it (though it was slow for a lot of the last 2 months).
But I strongly upvoted the parent because it is a good reminder to me of the cognitive diversity in the human population.
I wrote somewhere that this is the only forum that looks to me like the result of intelligent design rather than an accident. It’s the only one that looks as if I were trying to intelligently design the forum myself, including going back in time to fix problems after discovering them (or just spending five minutes thinking in advance about each aspect: “how can I hack this / what are the vulnerabilities of this system of rules / how could trolls exploit it”). The point is not only that, unlike on many other sites, I don’t find myself thinking every five minutes “why can’t X be here”; it’s that I look somewhere and see that something I hadn’t even had time to think of has already been provided, and that some protection has already been built against exploitation of vulnerabilities in the system of rules, or even against ordinary errors of human psychology.
I agree completely. The last N weeks or so there have been performance problems, but all of the little things… Version history on posts, strong upvotes/downvotes, restoration of comments… They make writing things just fun.
If we’re still talking about shortcomings, I could name four; I wrote about the first two in the Questions: the lack of arrows between sequences; the useless SEQ RERUNs that exist only for the sake of their comments, along with the problem of missing nested answers to questions in old comments; the recent performance problems (which turned out to be LessWrong’s problems, not mine, so I didn’t count them before); and the fact that from time to time you get a single downvote for completely incomprehensible reasons and the score never returns to positive (though, as far as I understand, requiring voters to give a reason for a downvote would be either a harmful or a not very useful measure). But considering that in all this time I have found a number of flaws that can be counted on the fingers of one hand, while on any other site they pour from every element every second as if from a cornucopia and, instead of being eliminated, are met with monthly useless graphical updates… all in all, this is just a surprisingly good result, although (there is no limit to perfection) I hope at least three of them will be fixed in the coming months. I don’t know about the last one, and there seem to be reasons why it isn’t fixed, but I note it just in case; otherwise, as with the glitches, it may turn out that everyone simply failed to report it.
The Nick Bostrom fiasco is instructive: never make public apologies to an outrage machine. If Nick had just ignored whoever it was trying to blackmail him, it would have been on them to assert the importance of a twenty-five-year-old, deliberately provocative email, and things might never have escalated to the point of mild drama. When he tried to “get ahead of things” by issuing an apology, he conceded that the email was in fact socially significant despite its age, and that he did in fact have something to apologize for, and so opened himself up to the Standard Replies that the apology is not genuine, he’s secretly evil etc. etc.
Instead, if you are ever put in this situation, just say nothing. Don’t try to defend yourself. Definitely don’t volunteer for a struggle session.
Treat outrage artists like the police. You do not prevent the police from filing charges against you by driving to the station and attempting to “explain yourself” to detectives, or by writing and publishing a letter explaining how sorry you are. At best you will inflate the airtime of the controversy by responding to it, at worst you’ll be creating the controversy in the first place.
Not because all people online are bad, but because Twitter is a “dark forest”. If there are 999 good people and 1 bad person, it’s the bad person who will take your tweet, maybe modify it a little, put it into most outrageous possible context, write an article about why you are the worst person ever, and share it on all social networks. And that’s the lucky case. In the unlucky case, the story will uncritically be accepted by journalists, then added to Wikipedia, you will get fired, and for the rest of your life, random people on the street will keep yelling at you.
Twitter should be legally required to show you this as a warning every time you are making a tweet.
EDIT:
This was written before I learned the details. Now the analogy with not talking to police seems even better: indeed, every word you say is a potential new incriminating evidence against you (and if it is not, it will simply be ignored), and the worst outcome is that the new evidence will hurt you in a way the old evidence could not.
Question: If I get in trouble with the police, I know I need to find a lawyer. If I get in trouble with an internet mob, and I understand the need to defer to a more experienced person’s advice to navigate the minefield, and I am willing to pay them, whose services exactly should I find? Is there an obvious answer, such as “lawyer” in case of legal trouble?
The professional class would be PR people. I vaguely remember reading that the firm that handled Biden’s sexual assault allegations also did good work for other people.
Question: If I get in trouble with the police, I know I need to find a lawyer. If I get in trouble with an internet mob, and I understand the need to defer to a more experienced person’s advice to navigate the minefield, and I am willing to pay them, whose services exactly should I find? Is there an obvious answer, such as “lawyer” in case of legal trouble?
I actually thought of this extension and cut it from the original post, but, if you need to defend yourself and have simple exonerating evidence, one way might be to find a friend willing to state your reservations without referring to the fact that they’ve spoken to you or you’re feeding them information. This way they can present your side of the story without giving it extra fuel, lending significance to the charges, or directly quoting you with statements you can be hanged for by the Twitter mob.
However, this may also just extend the half-life of the controversy.
Does this generalize to “just ignore Twitter (and other blathering by “the masses”) for most things”? Outside of a pretty small group, I haven’t heard much handwringing, condemnation nor defense of Bostrom’s old messages or his recent apology.
I personally think that personal honor is better supported by a thoughtful apology when something is brought to one’s attention, than by simply ignoring it. Don’t engage in a back-and-forth, and don’t expect the apology to convince the more vocal part of the ’verse. But do be honest and forthright with yourself and those who you respect enough to value their opinions.
From what I can tell (and I haven’t looked that deeply, as I don’t particularly care), Professor Bostrom has done this pretty well, and I don’t expect him to suffer much long-term harm from his early mistakes.
IMO I disagree with the implication that Nick Bostrom shouldn’t have apologized, since for once the Twitter machine is actually right to criticize the apology.
From titotal’s post on why Bostrom’s apology isn’t good, there are several tests that he failed at:
Okay, let’s go over the rules for an apology to be genuine and sincere. I’ll take them from here.
1. Acknowledge the offense.
2. Explain what happened.
3. Express remorse.
4. Offer to make amends.
Notably missing from this list is step 5: Go off on an unrelated tangent about eugenics.
Disclaimer: This is a rare action for me to take, and just because I think the Twitter sphere is somewhat right here does not mean that any of their conclusions are automatically right, nor that I will care much about what Twitter thinks.
The problem with trade agreements as a tool for maintaining peace is that they only provide an intellectual and economic reason for maintaining good relations between countries, not an emotional one. People’s opinions on war rarely stem from economic self-interest. Policymakers know about the benefits and (sometimes) take them into account, but important trade doesn’t make regular Americans grateful to the Chinese for providing them with so many cheap goods—much the opposite, in fact. The number of people who end up interacting with Chinese people or intuitively understanding the benefits firsthand as a result of expanded business opportunities is very small.
On the other hand, video games, social media, and the internet have probably done more to make Americans feel aligned with the other NATO countries than any trade agreement ever. The YouTubers and Twitch streamers I have pseudosocial relationships with are something like 35% Europeans. I thought Canadians spoke Canadian and Canada was basically some big hippie commune right up until my minecraft server got populated with them. In some weird alternate universe where people are suggesting we invade Canada, my first instinctual thought wouldn’t be the economic impact on free trade, it would be whether my old steam friend Forbsey was OK.
I mean, just imagine if Pewdiepie were Ukrainian. Or worse, some hospital he was in got bombed and he lost an arm or a leg. You wouldn’t have to wait for America to initiate a draft, a hundred thousand volunteers would be carving a path from Odessa to Moscow right now.
If I were God-Emperor and I wanted to calm U.S.-China relations, my first actions would be to make it really easy for Chinese people to get visas, or even subsidize their travel. Or subsidize Mandarin learning. Or subsidize Google Translate & related applications. Or really mulligan hard for our social media companies to get access to the Chinese market.
It would not be to expand free trade. Political hacks find it exceptionally easy to turn simple trade into some economic boogeyman story. Actually meeting and interacting with the people from that country, having shared media, etc., makes it harder to inflame tensions.
The “people are wonderful” bias is so pernicious and widespread I’ve never actually seen it articulated in detail or argued for. I think most people greatly underestimate the size of this bias, and assume opinions either way are a form of mind-projection fallacy on the part of nice/evil people. In fact, it looks to me like this skew is the deeper origin of a lot of other biases, including the just-world fallacy, and the cause of a lot of default contentment with a lot of our institutions of science, government, etc. You could call it a meta-bias that causes the Hansonian stuff to go largely unnoticed.
I would be willing to pay someone to help draft a LessWrong post for me about this; I think it’s important but my writing skills are lacking.
The “people are extraordinarily more altruistic-motivated than they actually are” bias is so pernicious and widespread I’ve never actually seen it articulated in detail or argued for.
I haven’t seen it articulated, or even mentioned. What is it? It sounds like this is just the common amnesia (or denial) of the rampant hypocrisy in most humans, but I’ve not heard that phrasing.
would it be fair to replace the first “are” (and maybe the second) with something that doesn’t imply essentialism or identity? “people are assumed to be” or “people claim to be” followed by “more altruistic than their behavior exhibits”?
The most salient example of the bias I can think of comes from reading interviews/books about the people who worked in the extermination camps in the holocaust. In my personal opinion, all the evidence points to them being literally normal people, representative of the average police officer or civil service member pre-1931. Holocaust historians nevertheless typically try very hard to outline some way in which Franz Stangl and crew were specially selected for lack of empathy, instead of raising the more obvious hypothesis that the median person is just not that upset by murdering strangers in a mildly indirected way, because the wonderful-humans bias demands a different conclusion.
This goes double in general for the entire public conception of killing as the most evil-feeling thing that humans can do, contrasted with actual memoirs of soldiers and the like who typically state that they were surprised how little they cared compared to the time they lied to their grandmother or whatever.
I may have the same bias, and may in fact believe it’s not a bias. People are highly mutable and contextual in how they perceive others, especially strangers, especially when they’re framed as outgroup.
The fact that a LOT of people could be killers and torturers in the right (or very wrong) circumstances doesn’t seem surprising to me, and this doesn’t contradict my beliefs that many or perhaps most do genuinely care about others with a better framing and circumstances.
There is certainly a selection effect, likewise for modern criminal-related work, that people with the ability to frame “otherness” and some individual-power drive, tend to be drawn to it. There are certainly lots of Germans who did not participate in those crimes, and lots of current humans who prefer to ignore the question of what violence is used against various subgroups*.
But there’s also a large dollop of “humans aren’t automatically ANYTHING”. They’re far more complex and reactive than a simple view can encompass.
* OH! that’s a bias that’s insanely common. I said “violence against subgroups” rather than “violence by individuals against individuals, motivated by membership and identification with different subgroups”.
I’ve gone back and forth with myself about this sort of stuff. Are humans altruistic? Good? Evil?
On the one hand, yes, I think lc is right that in some situations people exhibit an extraordinary lack of altruism and sympathy. But on the other hand, there are other situations where people do the opposite: they’ll, I dunno, jump into a lake at a risk to their own life to save a drowning stranger. Or risk their lives running into a burning building to save strangers (lots of volunteers did this during 9/11).
I think the explanation is what Dagon is saying about how mutable and context-dependent people are. In some situations people will act extremely altruistically. In others they’ll act extremely selfishly.
The way that I like to think about this is in terms of “moral weight”. How many utilons to John Doe would it take for you to give up one utilon of your own? Like, would you trade 1 utilon of your own so that John Doe can get 100,000 utilons? 1,000? 100? 10? Answering these questions, you can come up with “moral weights” to assign to different types of people. But I think that people don’t really assign a moral weight and then act consistently. In some situations they’ll act as if their answer to my previous question is 100,000, and in other situations they’ll act like it’s 0.00001.
My model of utility (and the standard one, as far as I can tell) doesn’t work that way. No rational agent ever gives up a utilon—that is the thing they are maximizing. I think of it as “how many utilons do you get from thinking about John Doe’s increased satisfaction (not utilons, as you have no access to his, though you could say “inferred utilons”) compared to the direct utilons you would otherwise get”.
Those moral weights are “just” terms in your utility function.
And, since humans aren’t actually rational, and don’t have consistent utility functions, actions that imply moral weights are highly variable and contextual.
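As a sketch of the picture being described, with $w_i$ as the “moral weight” term (my notation, not anyone’s canonical formalism):

$$U_{\text{you}}(x) = U_{\text{self}}(x) + \sum_i w_i\,\widehat{U}_i(x)$$

where $\widehat{U}_i$ is your inferred satisfaction for person $i$. You accept a trade that costs you $c$ and (you infer) gains person $i$ an amount $g$ whenever $w_i g > c$; acting “like the weight is 100,000” in one situation and “0.00001” in another just means the effective $w_i$ isn’t stable across contexts.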
actual memoirs of soldiers and the like who typically state that they were surprised how little they cared compared to the time they lied to their grandmother or whatever.
Not really memoirs, but a German documentary about WWII might be of interest to you: Der unbekannte Soldat.
I watched it on Amazon Prime, and you can still find the title there in a search; I’m not sure if it is only available for rent/sale now or if you can stream it with a Prime membership.
I’m not sure to what extent this is helpful, or if it’s an example of the dynamic you’re refuting, but Duncan Sabien recently wrote a post that intersects with this topic:
Also, if your worldview is such that, like. *Everyone* makes awful comments like that in the locker room, *everyone* does angle-shooting and tries to scheme and scam their way to the top, *everyone* is looking out for number one, *everyone* lies …
… then *given* that premise, it makes sense to view Trump in a positive light. He’s no worse than everybody else, he’s just doing the normal things that everyone does, with the *added layer* that he’s brave enough and candid enough and strong enough that he *doesn’t have to pretend he doesn’t.*
Admirable! Refreshingly honest and clean!
So long as you can’t conceive of the fact that lots of people are actually just …...............… good. They’re not fighting against urges to be violent or to rape, they’re not biting their tongues when they want to say scathing and hurtful things, they’re not jealous and bitter and willing to throw others under the bus to get ahead. They’re just … fundamentally not interested in any of that.
(To be clear: if you are feeling such impulses all the time and you’re successfully containing them or channeling them and presenting a cooperative and prosocial mask: that is *also* good, and you are a good person by virtue of your deliberate choice to be good. But like. Some people just really *are* the way that other people have to *make* themselves be.)
It sort of vaguely rhymes, in my head, with the type of person who thinks that *everyone* is constantly struggling against the urge to engage in homosexual behavior, how dare *those* people give up the good fight and just *indulge* themselves … without realizing that, hey, bro, did you know that a lot of people are just straight? And that your internal experience is, uh, *different* from theirs?
Where it connects is that if someone sees [making the world a better place] as simply selecting a better Nash equilibrium, they absolutely will spend time exploring solutionspace/thinking through strategies similar to Goal Factoring or Babble and Prune. Lots of people throughout history have yearned for a better world in a lot of different ways, with varying awareness of the math behind Nash equilibria, or of the transhumanist and rationalist perspectives on civilization (e.g. map & territory, biases, and scope insensitivity for rationalism; cryonics/anti-aging for transhumanism).
But their goal here is largely steering culture away from nihilism (since culture is a Nash equilibrium), which means steering many people away from themselves, or at least from the selves that they would have been. Maybe that’s pretty minor in this case, e.g. because feeling moderate amounts of empathy and living in a better society are both fun, but either way, changing a society requires changing people, and thinking really creatively about ways to change people tears down lots of Chesterton-Schelling fences, and it’s very easy to make really big damaging mistakes in the process (because you need to successfully predict and avoid all mistakes as part of the competent pruning process, and actually, measurably, consistently succeeding at this is thinkoomph, not just creative intelligence).
Add in conflict theory to the mistake theory I’ve described here, factor in unevenly distributed intelligence and wealth in addition to unevenly distributed traits like empathy and ambition and suspicion-towards-outgroup (e.g. different combinations of all 5 variables), and you can imagine how conflict and resentment would accumulate on both sides over the course of generations. There’s tons of examples in addition to Ayn Rand and Wokeness.
It’s worth separating what people actually believe about how altruistic other people are from what they pretend to believe about the altruism of other people.
If you ask someone whether they believe that there’s a chance that their partner would cheat on them, they are most likely to tell you that their partner would never cheat on them. The same person might nevertheless take a few signs that point in the direction of their partner cheating as a huge problem.
I would also expect that beliefs differ a lot between people.
I would be willing to pay someone to help draft a LessWrong post for me about this; I think it’s important but my writing skills are lacking.
I’m not looking to write a post about this, but I’d be happy to go back and forth with you in the comments about it (no payment required). Maybe that back and forth will help you formulate your thoughts.
For starters, I’m not sure if I understand the bias that you are trying to point to. Is it that people assume others are more altruistic than they actually are? Do any examples come to your mind other than this?
People accept that being altruistic is good before actually thinking about whether they want to do it. And they also choose weird axioms for being altruistic that their intuitions may or may not agree with (like valuing the life of someone in the future the same as the life of someone today).
Serious question: Is he a comic book supervillain? Is this world actually real? Why does this quote not garner an emotive reaction out of anybody but me?
I was surprised by this quote. On following the link, the sentence by itself seems noticeably out of context; here’s the next part:
On the growing artificial intelligence market: “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.”
On what Altman would do if he were President Obama: “If I were Barack Obama, I would commit maybe $100 billion to R&D of AI safety initiatives.” Altman also shared that he recently invested in a company doing “AI safety research” to investigate the potential risks of artificial intelligence.
PSA: I have realized very recently, after extensive interactive online discussion with rationalists, that they are exceptionally good at arguing. Too good. Probably there’s some inadvertent pre- or post-selection for skill at debating high-concept stuff going on.
Wait a bit before acceding to their position in a live discussion with them, where you start by disagreeing strongly for maybe intuitive reasons and then suddenly find the ground shifting beneath your feet. It took me repeated interactions, where I only later realized I’d been hoodwinked by faulty reasoning, to notice the pattern.
I think in general believing something before you have intuition around it is unreliable or vulnerable to manipulation, even if there seems to be a good System 2 reason to do so. Such intuition is specialized common sense, and stepping outside common sense is stepping outside your goodhartscope where ability to reliably reason might break down.
So it doesn’t matter who you are arguing with, don’t believe something unless you understand it intuitively. Usually believing things is unnecessary regardless, it’s sufficient to understand them to make conclusions and learn more without committing to belief. And certainly it’s often useful to make decisions without committing to believe the premises on which the decisions rest, because some decisions don’t wait on the ratchet of epistemic rationality.
I’m on board with this. It’s a common failure of reasoning in this community and in humanity in general imo—people believing each other too early because of confident-sounding reasoning. I’ve learned to tell people I’ll get back to them after a few nights’ sleep when someone asks me what my update is about a heavily philosophical topic.
people believing each other too early because of confident sounding reasoning
That’s a tricky thing: the method advocated in the Sequences is lightness of belief, which helps in changing your mind but also dismantles the immune system against nonsense, betting that with sufficient overall rationality training this gives a better equilibrium.
I think aiming for a single equilibrium is still inefficient use of capabilities and limitations of human mind, and it’s better to instead develop multiple segregated worldviews (something the Sequences explicitly argue against). Multiple worldviews are useful precisely to make the virtue of lightness harmless, encouraging swift change in details of a relevant worldview or formation of a new worldview if none account for new evidence. In the capacity of paradigms, some worldviews might even fail to recognize some forms of evidence as meaningful.
This gives worldviews opportunity to grow, to develop their own voice with full support of intuitive understanding expected in a zealot, without giving them any influence over your decisions or beliefs. Then, stepping back, some of them turn out to have a point, even if the original equilibrium of belief would’ve laughed their premises out of consideration before they had a chance of conveying their more nuanced non-strawman nature.
I feel like “what other people are telling me” is a very special type of evidence that needs to be handled with extra care. It is something that was generated by a potentially adversarial intelligence, so I need to check for some possible angles of attack first. This generally doesn’t need to be done with evidence that is just randomly thrown at me by the universe, or which I get as a result of my experiments. The difference is, basically, that the universe is only giving me the data, but a human is simultaneously giving me the data (potentially filtered or falsified) and also some advice on how to think about the data (potentially epistemically wrong).
Furthermore, there is a difference between “what I know” and “what I am aware of at this very moment”. There may be some problem with what the other person is telling me, but I may not necessarily notice it immediately. Especially when the person is drawing my attention away from that on purpose. So even if I do not see any problem with what that person said right now, I might notice a few problems after I sleep on it.
My own mind has all kinds of biases; how I evaluate someone’s words is colored by their perceived status, whether I feel threatened by them, etc. That is a reason to rethink the issue later when the person is not here.
In other words, if someone tells me a complex argument “A, therefore B, therefore C, therefore D, therefore you should give me all your money; in the name of Yudkowsky be a good rationalist and update immediately”, I am pretty sure that the rational reaction is to ignore them and take as much time as I need to rethink the issue alone or maybe with other people whom I trust.
By worldviews I mean more than specialized expertise where you don’t yet have the tools to get your head around how something unfamiliar works (like how someone new manipulates you, how to anticipate and counter this particular way of filtering of evidence). These could instead be unusual and currently unmotivated ways of looking at something familiar (how an old friend or your favorite trustworthy media source or historical truths you’ve known since childhood might be manipulating you; how a “crazy” person has a point).
The advantage is in removing the false dichotomy between keeping your current worldview and changing it towards a different worldview. By developing them separately, you take your time becoming competent in both, and don’t need to hesitate in being serious about engaging with a strange worldview on its own terms just because you don’t agree with it. But sure, getting more intuitively comfortable with something currently unfamiliar (and potentially dangerous) is a special case.
while I definitely see your argument, something about this seems weird to me and doesn’t feel likely to work properly. my intuition is that you just have one mashed worldview with inconsistent edges; while that’s not necessarily terrible or anything, and keeping multiple possible worldviews in mind is probably good, my sense is that “full support [as] expected in a zealot” is unhealthy for anyone. something or other overoptimization?
I do agree multiple worldviews discussing is an important thing in improving the sanity waterline.
It is weird in the sense that there is no widespread practice. The zealot thing is about taking beliefs-within-a-worldview (that are not your beliefs) seriously, biting the bullet, which is important for naturally developing any given worldview the way a believer in it would, not ignoring System 2 implications that challenge and refine preexisting intuition, making inferences according to its own principles and not your principles. Clearly even if you try you’ll fail badly at this, but you’ll fail even worse if you don’t try. With practice in a given worldview, this gets easier, an alien worldview obtains its own peculiar internal common sense, a necessary aspect of human understanding.
The named/distinct large worldviews is an oversimplification, mainly because it’s good to allow any strange claim or framing to have a chance of spinning up a new worldview around itself if none would take it as their own, and to merge worldviews as they develop enough machinery to become mutually intelligible. The simplification is sufficient to illustrate points such as a possibility of having contradictory “beliefs” about the same claim, or claims not being meaningful/relevant in some worldviews when they are in others, or taking seriously claims that would be clearly dangerous or silly to accept, or learning claims whose very meaning and not just veracity is extremely unclear.
Studying math looks like another important example, with understanding of different topics corresponding to worldviews where/while they remain sparsely connected, perhaps in want of an application to formulating something that is not yet math and might potentially admit many kinds of useful models. Less risk of wasting attention on nonsense, but quite a risk of wasting attention on topics that would never find a relevant application, were playing with math and building capacity to imagine more kinds of ideas not a goal in itself.
Note also that they may be taking positions which are selected for being easy to argue—they’re the ones they were convinced by, of course. Whether you think that has correlation with truth is up to you—I think so, but it’s not a perfect enough correlation for it to be enough.
I don’t know exactly what you mean by “acceding” to a position in a discussion—if you find the arguments strong, you should probably acknowledge that—this isn’t a battle, it’s a discussion. If you don’t find yourself actually convinced, you should state that too, even if your points of disagreement are somewhat illegible to yourself (intuition). And, of course, if you later figure out why you disagree, you can re-open the discussion next time it’s appropriate.
An interesting whitepill hidden inside Scott Alexander’s SB 1047 writeup is that lying doesn’t work as well in politics as predicted. It’s possible that if the opposition had lied less often, or we had lied more often, the bill would not have gotten a supermajority in the Senate.
Many of you are probably wondering what you will do if/when you see a polar bear. There’s a Party Line, uncritically parroted by the internet and wildlife experts, that while you can charge/intimidate a black bear, polar bears are Obligate Carnivores and the only thing you can do is accept your fate.
I think this is nonsense. A potential polar bear attack can be defused just like a black bear attack. There are loads of YouTube videos of people chasing polar bears away by making themselves seem big and aggressive, and I even found some indie documentaries of people who went to the Arctic with expectations of being able to do this. The main trick seems to be to resist the urge to run away, make yourself look menacing, and commit to warning charges in the bear’s general direction until it leaves.
I can’t decide what the epistemic status of that post is, but in the same spirit, here’s how to tell the difference between a black bear and a grizzly. Climb a tree. A black bear will climb up after you and eat you, while a grizzly will knock down the tree and eat you.
I believe the intended message of the “fight back, lay down, goodnight” maxim is “Thou shalt not generalize your experience with black bears to grizzlies!” I don’t expect there is much danger of someone asking “if not friend, why friend shaped?” of polar bears; they just fill out the Rule of Three.
It’s a lot like the “red touches yellow, he’s a friendly fellow; red touches black, you’re dead, Jack” mnemonic for snakes: people are very likely to encounter the (relatively) harmless one, and you really want them not to learn the wrong lessons from that.
Does anybody here have any strong reason to believe that the ML research community norm of “not taking AGI discussion seriously” stems from a different place than the oil industry’s norm of “not taking carbon dioxide emission discussion seriously”?
I’m genuinely split. I can think of one or two other reasons there’d be a consensus position of dismissiveness (preventing bikeshedding, for example), but at this point I’m not sure, and it affects how I talk to ML researchers.
I’m not sure the “ML Research Community” is cohesive enough (nor, in fact, well-defined enough) to have very strong norms about this. Further, it’s not clear that there needs to be a “consensus reasoning” even if there is a norm—different members could have different reasons for not bringing it up, and once it’s established, it can be self-propagating: people don’t bring it up because their peers don’t bring it up.
I think if you’re looking for ways to talk to ML researchers, start small, and see what those particular researchers think and how they react to different approaches. If you find some that work, then expand it to more scalable talks to groups of researchers.
I don’t expect AI researchers to achieve AGI before they find one or more horrible uses for non-general AI tools, which may divert resources, or change priorities, or do something else which prevents true AGI from ever being developed.
Because of its low chance of existential risk or a singularity utopia. Here’s the thing: technologies are adopted first at a low level by early adopters, then they become cheaper and better, then they more or less become very popular. No technology has ever had the asymptotic growth or singularity that ML/AI advocates claim will happen. So we should be very skeptical about any claims of existential risk.
On climate change, we both know it will be serious and that it is not an existential risk or a civilization collapse disaster.
Lie detection technology is going mainstream. ClearSpeed is such an accuracy and ease-of-use improvement over polygraphs that various government LEO and military organizations are starting to notice. In 2027 (edit: maybe more like 2029) it will be common knowledge that you can no longer lie to the police, and you should prepare for this eventuality if you haven’t.
I think it’s possible to beat such lie detectors by considering the question in such a way that you get the answer you want. “Did you kill that man?” “No” (mental framing: the knife killed him/he killed himself by annoying me/I’m a different person today/My name is not “you” so it’s technically false, etc)
Lie detection technology must be open sourced. It could fix literally everything. Just ask people “how much do you want to fix literally everything”, “how much did you think about ways to do better and avoid risk”, “do you have the skills for this position or think you can get them” etc, so many profoundly incredible things are downstream of finding and empowering the people who give good answers.
It’s AI-based, so my guess is that it uses a lot of somewhat superficial correlates that could be gamed. I expect that if it went mainstream it would be Goodharted.
I expect Goodhart would hit particularly bad if you were doing the kind of usage I guess you are implying, which is searching for a few very well selected people. A selective search is a strong optimization, and so Goodharts more.
More concrete example I have in mind, that maybe applies right now to the technology: there are people who are good at lying to themselves.
That’s not really the kind of usage I was thinking of; I was thinking of screening out low-honesty candidates from a pool who already qualified to join a high-trust system (which currently do not exist for any high-stakes matter). Large amounts of sensor data (particularly from people lying and telling the truth during different kinds of interviews) will probably be necessary, but will need to focus on specific indicators of lying, e.g. discomfort or heart rate changes or activity in certain parts of the brain, and extremely low false positive and false negative rates probably won’t be feasible.
Also, hopefully people would naturally set up multiple different tests for redundancy, each of which would have to be Goodharted separately, and each false negative (a case of a uniquely bad person being revealed as bad after passing the screening) would be added to the training data. Periodically re-testing people for the concealed emergence of low-trust tendencies would further facilitate this. Sadly, whenever a person slips through the cracks, lies, and discovers they got away with it, they will know that they got away with it and continue doing it.
I’m not sure I can go into detail, but the 97% true positive (i.e. lie) detection rate cited on the website is accurate. More important, people who can administer polygraphs or know how they work can defeat polygraphs. These tests are apparently much more difficult to cheat, at least for now & while they’re proprietary.
Hey [anonymous]. I see you deactivated your account. Hope you’re okay! Happy to chat if you want on Signal at five one oh, nine nine eight, four seven seven one (also a +1 at the front for US country code).
(Follow-up: [anonymous] reached out, is doing fine.)
Pretty much ~everybody on the internet I can find talking about the issue both mischaracterizes and exaggerates the extent of child sex work inside the United States, often to a patently absurd degree. Wikipedia alone reports that there are anywhere from “100,000-1,000,000” child prostitutes in the U.S. There are only ~75 million children in the U.S., so I guess Wikipedia thinks it’s possible that more than 1% of people aged 0-17 are prostitutes. As in most cases, these numbers are sourced from “anti sex trafficking” organizations that, as far as I can tell, completely make them up.
Actual child sex workers—the kind that get arrested, because people don’t like child prostitution—are mostly children who pass themselves off as adults in order to make money. Part of the confusion comes from the fact that the government classifies any instance of child prostitution as human trafficking, regardless of whether or not there’s evidence the child was coerced. Thus, when the Department of Justice reports that federal law enforcement investigated “2,515 instances of suspected human trafficking” from 2008-2010, and that “forty percent involved prostitution of a child or child sexual exploitation”, it means that it investigated ~1000 possible cases of child prostitution, not that it found 1000 child sex slaves.
People believe a lot of crazy things, but I am genuinely flabbergasted at how many people find it plausible that there’s an entire underworld industry of kidnapping children and selling them to pedophiles in first world countries. I know why the anti sex trafficking orgs sell these stories—they’re trying to attract donations, and who is going to call out an “anti sex trafficking” charity? But surely most people realize that it would be very hard for an organized child rape cabal to spread word about their offerings to customers without someone alerting police.
They do the same thing with “child pornography”: that’s mostly teenagers sexting. And a girl was convicted for it too, charged with distributing child pornography of herself: link.
The other day I was trying to think of information leaks that a competent conspiracy couldn’t prevent, regarding this. I just thought of one small one: people will sometimes randomly die or have their homes raided. If the slavery is common, then sometimes the slaves will be discovered during these events. Even if the escapees wanted to silence the story out of shame, cops would probably gossip to the press.
So you can probably tally such events, crunch the numbers, and get a decent conspiracy-resistant estimate.
Now is the time to write to your congressman and (may allah forgive me for uttering this term) “signal boost” about actually effective AI regulation strategies—retroactive funding for hitting interpretability milestones, good liability rules surrounding accidents, funding for long term safety research. Use whatever contacts you have, this week. Congress is writing these rules now and we may not have another chance to affect them.
Noticed something recently. As an alien, you could read pretty much everything Wikipedia has on celebrities, both on individual people and the general articles about celebrity as a concept… And never learn that celebrities tend to be extraordinarily attractive. I’m not talking about an accurate or even attempted explanation for the tendency, I’m talking about the existence of the tendency at all. I’ve tried to find something on wikipedia that states it, but that information just doesn’t exist (except, of course, implicitly through photographs).
It’s quite odd, and I’m sure it’s not alone. “Celebrities are attractive” is one obvious piece of some broader set of truisms that seem to be completely missing from the world’s most complete database of factual information.
Analyzing or talking about status factors is low-status. You do see information about awards for beauty, much like you can see some information about finances, but not much about their expenditures or lifestyle.
Part of the issue is likely that celebrity, as Wikipedia approaches the word, is broader than just modern TV, film, etc. celebrity, and instead includes a wide variety of people who are not likely to be exceptionally attractive but are well known in some other way. There are individual differences in who people find attractive, but many politicians, authors, radio personalities, famous scientists, etc. are not conventionally attractive in the way movie stars are attractive, and yet these people are still celebrities in a broad sense. However, I’ve not dug into the depths of Wikipedia to see if, for example, this gap holds up on pages that talk more directly about the qualities of film stars.
I think there’s also a “it’s obvious to everyone, so archaeologists of the future won’t find any mention of it because no one has had to explain it to anyone” factor. (I heard that archaeologists and historians know much less about everyday life than about significant events, although the former was obviously encountered much more often)
The Olympics are really cool. I appreciate that they exist. There are some timelines out there where they don’t have an Olympics and nobody notices anything is wrong.
Let me put in my 2c now that the collapse of FTX is going to be mostly irrelevant to effective altruism, except inasmuch as EA and longtermist foundations no longer have a bunch of incoming money from Sam Bankman-Fried. People are going on and on about the “PR damage” to EA by association because a large donor turned out to be a fraud, but are failing to actually predict what the concrete consequences of such a “PR loss” are going to be. Seems to me like they’re making the typical fallacy of overestimating general public perception[1]’s relevance to an insular ingroup’s ability to accomplish goals, as well as overestimating the public’s attention span in the first place.
Falling birthrates is the climate change of the right:
Vaguely tribally valenced for no really good reason
Predicted outcomes range from “total economic collapse, failed states” to “slightly lower GDP growth”
People use it as an excuse to push radical social and political changes when the real solutions are probably a lot simpler if you’re even slightly creative
I wonder what the optimal population size is, because it seems to me that most people say either “more” or “less” (and yes, it seems strongly correlated with the political tribe), but no one ever gives an exact number. I suspect there is no optimal number; that the people who say “more” or “less” will keep saying that regardless.
Too bad that more nuanced views, such as “let’s have more good and competent people, but fewer evil and incompetent people” are definitely outside the Overton window. :D
I mean, most moral theories do either give the answers of “zero”, “as large as can be fed”, or “a bit less than as large as can be fed”. Given the potential to scale feeding in the future, the latter two round off to “infinity”.
Most justice systems seem to punish theft on a log scale. I’m not big on capital punishment, but it is actually bizarre that you can misplace a billion dollars of client funds and escape the reaper in a state where that’s done fairly regularly. The law seems to be saying: “don’t steal, but if you do, think bigger.”
And relatedly, I’m not sure about capital punishment, but it seems obvious to at least attempt to make fines proportionate to net worth or something. I.e., Bill Gates shouldn’t get the same sized speeding ticket as John Doe on welfare.
This feels like it’d be political policy that is low hanging fruit. I suspect that it isn’t because of EMH reasons, but I don’t understand the reasons why it isn’t.
I don’t agree with the take about net worth. The fine should just be whatever makes the state ambivalent about the externalities of speeding. If Bill Gates wants to pay enormous taxes to speed aggressively then that would work too.
Hm, I hadn’t thought about it that way. I was just thinking that the goal of the fine is some combination of 1) punitive and 2) deterrent, and neither of those goals are accomplished if you fine Bill Gates $200. But yeah, I guess if you make the fine large enough such that the state is ambivalent, maybe it all works out.
Theft of any amount over a hundred or so dollars is evil and needs to be punished. Let’s say you punish theft of $100 by a weekend in jail. Extrapolate that on a linear scale and you’ll have criminals who non-violently stole $20,000 doing more than double the jail time that a criminal who cold-cocked a stranger and broke his jaw would get. Doesn’t really make sense.
It strikes me that I’m not sure whether I’d prefer to lose $20,000 or have my jaw broken. I’m pretty sure I’d prefer to have my jaw broken than to lose $200,000, though. So, especially in the case that the money cannot actually be extracted back from the thief, I would tend to think the $200,000 theft should be punished more harshly than the jaw-breaking. And, sure, you’ve said that the $20,000 would be punished more harshly than the jaw-breaker, but that’s plausibly just because 2 days is too long for a $100 theft to begin with.
With a billion dollars you can probably hire better lawyers.
Do other crimes, for example murder, follow a similar pattern? Like, at some moment they might execute you, but what are they going to do if you kill 10 times more people?
Can they cancel you more if you post 10 times more offensive tweets?
Maybe everything is (sub-)logarithmic, because that’s how people think.
In which case, a group of rationalist criminals should precommit that if they get caught, they will randomly choose one of them, who will accept the blame for everything.
With a billion dollars you can probably hire better lawyers
This isn’t the source of the trend; the sentencing guidelines for fraud are actually literally, explicitly logarithmic. The government recommends directly that sentences follow a curve of 2x price --> 2 more years.
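Taking the quoted two-more-years-per-doubling rule at face value (my own illustration of the implied curve, not the actual guideline table), the sentence grows logarithmically in the loss:

$$\text{sentence}(L) \approx \text{sentence}(L_0) + 2\log_2\!\left(\frac{L}{L_0}\right)\ \text{years}$$

So scaling the loss up by a factor of 100, say from $10,000 to $1,000,000, adds only about $2\log_2(100) \approx 13$ years.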
Do other crimes, for example murder, follow a similar pattern? Like, at some moment they might execute you, but what are they going to do if you kill 10 times more people?
There seems to be a MAX_PUNISHMENT in the justice system (we don’t devolve into torture, etc.), which is reasonable. But with things like armed robbery you would get convicted for each individual count, not on a log scale.
In which case, a group of rationalist criminals should precommit that if they get caught, they will randomly choose one of them, who will accept the blame for everything.
This is (I suspect) a very common strategy among even regular criminals. You can think of it like a trade between law enforcement and gangs; the government gets its clearances and avoids the potential embarrassment of a partially-solved case, and the gang sends only its John Wayne Gacy to jail.
LessWrong as a website has gotten much more buggy for me lately. 6 months ago it worked like clockwork, but recently I’m noticing that refreshes on my profile page take something like 18 seconds to complete, or even 504 (!). I’m trying to edit my old “pessimistic alignment” post now and the interface is just not letting me; the site just freezes for a while and then refuses to put the content in the text box for me to edit.
Marvelous. I didn’t mention this because I thought that the problem was not on LessWrong’s side, since in my country a lot of things have been slowing down, blocking, or denying access lately, from at least three directions at once: the state / ISPs, other countries / companies, and those who simply don’t want trouble.
In order to synchronize against the illusion of transparency, I will list the specific errors that I myself see: bad gateway (seems to be somehow related to following links within the site and back); “Error: NotFoundError: Failed to execute removeChild on Node: The node to be removed is not a child of this node.” (red, replaces the entire page, sometimes appears when you click “submit”); long page loading at the beginning; the rest of the page loading long after the profile karma indicator and new-message counter have updated; and, when double-tapping (on the phone), the vote is not strengthened but reset.
The performance problems have also been annoying me, though I don’t think it’s been 6 months since they’ve gotten worse (I think it’s been more like 4 weeks based on my read of the logs, which have sadly overlapped with some time period where it’s been hard for me or others to focus on fixing them). I’ve really hated it, and if I didn’t have COVID right now, would probably be trying to fix them right now.
Not sure what’s up about the editor. I don’t think I’ve experienced many additional problems here, though we have been rolling out a new editor, so new bugs aren’t that surprising. A bug report via Intercom would be greatly appreciated.
I think it’s been more like 4 weeks based on my read of the logs, which have sadly overlapped with some time period where it’s been hard for me or others to focus on fixing them
On reflection, it’s very likely that I’m misremembering how far back it was; I just picked a date by which the site definitely worked fast.
Keep a diary. Human memory is unreliable and often fills in the gaps with its best guess—if you believe something now, your memory will try to convince you that you have always believed it (unless you have dramatic evidence to the contrary), because that is easier than tracking your beliefs over time. (Also, it protects you from the emotional pain of knowing that you have changed your mind.) A diary may show you that the past was not as you remember it.
At least this is how it works for me. I have no dramatic conversions in my past; my opinions have changed gradually. So absent hard evidence, it is easy for me to imagine that the 13-year-old me (the earliest I can remember having actual opinions on things) was basically just like me today, only in a younger body, minus all the experience and professional skills. But when I found my old diary, I screamed in horror and quickly destroyed the evidence.
The parts of my personality that have stayed mostly unchanged for a long time are values and preferences. As far as I remember, I was interested in math and later in computers, and I was interested in truth and in helping others. (I have some evidence for that, such as competing in math olympiads, or getting in trouble because I asked too much.) My beliefs, however… let’s just say that before 30 I was quite stupid. Yeah, I didn’t feel that way. Most stupid people don’t.
Before 30, I was also a moron. But I only know this because I had an ideological epiphany after that and my belief system changed abruptly. Scales-fell-from-my-eyes type situation. When I turned 33, I started keeping a diary because I noticed I have a terrible memory for even fairly recent things, so maybe going forward subtle changes will become more salient.
That said, some things seem more impervious to change. For instance the “shape” of things that give you pleasure. Maybe you liked 3d puzzles as a child and now you like playing in Blender in your free time. Not the same thing, but the same shape.
I’d say what changed for me was my model of the world, other people, myself, and a corresponding change of priorities.
Having (officially undiagnosed) Asperger’s, I basically had no idea how other people think and behave. I did things that felt natural to me, and people reacted, often illogically. I didn’t realize that most people lie most of the time, and even when I started suspecting that, I wasn’t able to figure out the truth.
But it wasn’t merely my personal stupidity. It also feels like I was culturally discouraged from figuring out the truth. Thinking not-nice things about other people seems frowned upon; that is what villains typically do, and they are always proven wrong at the end of the story. Then again, it’s the autism spectrum that makes you believe the narrative more than the things you actually see. The hypothesis that was taboo to consider, despite being a good first approximation, was: “what if most people are actually selfish and kinda stupid, and they lie whenever it is convenient, including to themselves, and most of them worry a lot about how others perceive them?” And suddenly, so many things started making sense.
The important thing is that not all people are like this, so you need to tell them apart (but judging people is another cultural taboo), and keep the smart and good ones around you, because (again as a first approximation) people don’t change. To do this successfully, you need to stop confusing “smart” with “acts like a stereotypical Mensa member” and “suffers from a big ego and the Dunning–Kruger effect”. Smartness is more about flexible thinking, and often results in the person being good at what they do, even if it is not a stereotypical intellectual task. Also, someone who is nice and doesn’t do anything obviously stupid probably is quite smart (because most people do stupid things) and “being nice” is the thing they are good at. -- I wish I had known all this in high school and university, when I was surrounded by many people I could choose from.
EDIT: After thinking about it more, this is ultimately a problem of signaling. As a null hypothesis, I guess everyone does the typical mind fallacy. Good people assume that most people are good, bad people assume that most people are bad, et cetera. Now the problem is that to achieve a more realistic perspective, good people need to update towards most people being actually not that good… but the people most likely to give you this update are the ones you do not want to associate with. Basically, “a bad person who assumes that everyone else is bad and that only hypocrites say otherwise” sounds quite similar to “a good person, who originally assumed that everyone else was good, then got burned, and now wants to share the costly lesson with other good people”. (If you listen to them for a longer time, you will notice the difference, because the bad person will conclude “and therefore, it is only fair for us to also hurt others”, while the good person will conclude “and I still keep trying to help others, but I no longer expect that they will reciprocate”.)
Another big update was related to careers. Yes, working hard is important; that’s how you level up. But this will not translate into rewards automatically; you need to negotiate, and sometimes you need to leave for a place that values you more. You also need to be strategic about which skills to level up; some things that your employer wants you to learn (obsolete technologies they still use, internally developed systems) will be useless when you change jobs. The relation between how difficult the work is, how stressful the work environment is, and how much they pay you is mostly random; do not hesitate to leave an unpleasant place thinking “if I can barely handle this, I am not good enough for a better paid place”; chances are that your next job will be easier and will pay more (at least because salaries have since been increased by inflation).
Most companies don’t threaten their employees with physical violence. According to another Boeing whistleblower, Sam Salehpour, that seems to happen at Boeing.
Since Boeing is a defense contractor, I would expect Boeing corporate to have better relationships with the kind of people you would hire for such a task than most corporations do.
Robin Hanson has apparently asked the same thing. It seems like such a bizarre question to me:
Most people do not have the constitution or agency for criminal murder
Most companies do not have secrets large enough that assassinations would reduce the size of their problems in expectation
Most people who work at large companies don’t really give a shit if that company gets fined or into legal trouble, and so they don’t have the motivation to personally risk anything organizing murders to prevent lawsuits
Most people do not have the constitution or agency for criminal murder
I think my model of people is that people are very much changed by the affordances that society gives them and the pressures they are under. In contrast with this statement, a lot of hunter-gatherer people had to be able to fight to the death, so I don’t buy that it’s entirely about the human constitution. I think if it were a known thing that you could hire an assassin to kill an employee and, unless you messed up and left quite explicit evidence connecting you, you’d get away with it, then there would be enough pressure to cause people in extremis to do it a few times per year even in just high-stakes business settings. Also my impression is that business or political assassinations exist to this day in many countries; a little searching suggests Russia, Mexico, Venezuela, possibly Nigeria, and more.
I generally put a lot more importance on tracking which norms are actually being endorsed and enforced by the group / society as opposed to primarily counting on individual ethical reasoning or individual ethical consciences.
(TBC I also am not currently buying that this is an assassination in the US, but I didn’t find this reasoning compelling.)
Also my impression is that business or political assassinations exist to this day in many countries; a little searching suggests Russia, Mexico, Venezuela, possibly Nigeria, and more.
Oh definitely. In Mexico in particular, business pairs up with organized crime all the time to strong-arm competitors. But this happens when there’s an organized-crime apparatus that tycoons can cheaply (in terms of risk) pair up with. Also, OP asked specifically about why companies don’t assassinate whistleblowers all the time.
a lot of hunter-gatherer people had to be able to fight to the death, so I don’t buy that it’s entirely about the human constitution
That was not criminal murder by the standards of the time. Arguably a lot of gang murders committed in the United States are committed by people who would not be capable of, or willing to, go out and murder people on their own.
In worlds where status is doled out based on something objective, like athletic performance or money, there may be lots of bad equilibria & doping, and life may be unfair, but at the end of the day competitors will receive the slack to do unconventional things and be incentivized to think rationally about the game and their place in it.
In worlds where status is doled out based on popularity or style, like politics or Twitter, the ideal strategy will always be to mentally bully yourself into becoming an inhuman goblin-sociopath, and keep hardcoded blind spots. Naively pretending to be the goblin in the hopes of keeping the rest of your epistemics intact is dangerous in these arenas; others will prod your presentation and try to reveal the human underneath. The lionized celebrities will be those that embody the mask to some extent, completely shaving off the edges of their personality and thinking and feeling entirely in whatever brand of riddlespeak goes for truth inside their subculture.
There’s truth in what you’re saying. At the same time, I feel like people have an instinctive desire for clarity over riddlespeak. I think it’s the same instinct that makes people favor 4k televisions over standard definition. I think it’s possible to make a twitter-like medium that discourages hardcoded blind spots.
A surprisingly large number of people seem to apply statuslike reasoning to inanimate goods. To many, if someone sells a coin or an NFT for a very high price, this is not merely curious or misguided: it’s outright infuriating. They react as if others are making a tremendous social faux pas—and even worse, as if society is validating their missteps.
I don’t use twitter very much, mostly reading links and threads someone points to from some other medium. I pretty much never publicly tweet. I presume I’m not your target for this advice, but for clarity are you worried about consumption (wasting time, developing biased views) or production (producing bias or over-simple models)?
Most importantly, do you have a “do more of X” to augment your “do less/none of Y (Y: twitter)”?
The Antarctic Treaty (and subsequent treaties) forbid colonization. They also forbid extraction of useful resources from Antarctica, thereby eliminating one of the main motivations for colonization. They further forbid any profitable capitalist activity on the continent. So you can’t even do activities that would tend toward permanent settlement, like surveying to find mining opportunities, or opening a tourist hotel. Basically, the treaty system is set up so that not only can’t you colonize, but you can’t even get close to colonizing.
Northern Greenland is inhabited, and it’s at a similar latitude.
(Begin semi-joke paragraph) I think the US should pull out of the treaty, and then announce that Antarctica is now part of the US, all countries are welcome to continue their purely scientific activity provided they get a visa, and announce the continent is now open to productive activity. What’s the point of having the world’s most powerful navy if you can’t do a fait accompli once in a while? Trump would love it, since it’s simultaneously unprecedented, arrogant and profitable. Biggest real estate development deal ever! It’s huuuge!
We arguably have already colonized Antarctica. See Wikipedia.
A similar point would be: There is no permanent deep sea settlement (an underwater habitat), although this would be much easier to achieve than a settlement on Mars.
In principle I suppose one could build very large walls around it to reduce heat exchange with the rest of Earth and a statite mirror (or few slowly orbiting ones) to warm it up. That would change the southern hemisphere circulation patterns somewhat, but could be arranged to not affect the overall heat balance of the rest of Earth.
This is very unlikely to happen for any number of good reasons.
I don’t know, but I expect the fraction is high enough to constitute significant empirical evidence towards the “Will quantum randomness affect the 2028 election?” question (since quantum randomness affects the weather, the wind speed affects bullet trajectories, and whether or not one of the candidates in the 2024 election was assassinated seems pretty influential on the 2028 election).
Not sure, but it seems to me that in the vast majority of Everett branches in which shots were fired at Trump, either they all missed or at least one of them scored a hit solid enough to kill or seriously injure Trump. The outcome that happened in our branch (graze his cheek & ear) is pretty unlikely. I don’t think there are any implications of this, it’s just interesting.
Is the “percent of everett branches” a literal question, or just a clever way of saying “prior probability at the moment of gunfire”? Taken literally, there’s an infinitesimal fraction of branches that contain humans, a tiny fraction of those contain Trump, a trivial slice of THOSE have him in the public eye enough to get shot at, only a few of which have that event and that shooter present, etc...
It’s a lot like saying “what’re the chances that this week’s lottery would be EXACTLY 11, 23, 44, 46, 51, 60”? It depends on when you ask the question, and what your reference set is. The reference set of everett branches is near-infinite (I haven’t seen a formal treatment arguing that it’s truly infinite, nor what kind of infinity), so any given set of similar-in-some-ways branches is infinitesimal. At a human probability level, the chances that Trump died are now 0 (or at least near-zero; you can never be truly certain).
The chances of being injured in the head but not brain-damaged are rather small, I think less than 10 percent. So in 90 percent of branches where shots were fired in the direction of his head, he is seriously injured or dead. However, climbing onto the roof without a Secret Service reaction was also a very unlikely event, maybe only a 10 percent chance of success.
Combining these, I get a 9 percent chance that he was dead or seriously injured yesterday.
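Spelling out the arithmetic as I read it (just restating the two estimates above, not adding any new ones): a 0.10 chance the shooter gets into position to fire, times a 0.90 chance that shots aimed at the head cause death or serious injury, gives $0.10 \times 0.90 = 0.09$.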
“But someone would have blown the whistle! Someone would have realized that the whistle might be blown!”
I regret to tell you that most of the time intelligence officers just do what they’re told.
Yes, if you have an illegal spying program running for ten years with thousands of employees moving in and out, that will run a low-grade YoY chance of being publicized. Management will know about that low-grade chance and act accordingly. But most of the time you as a civilian just never hear about what it is that intel agencies are doing, at least not for the first fifty or so years.
I also regret to inform you that it is much easier for state departments to cover up a managerial decision not to act on information, and later pretend it was a “mistake” or an “interbranch communication issue”, than to get away with active measures. Only conspiracy theorists decided that Bush not acting on the “Bin Laden determined to strike in U.S.” presidential brief, the one that mentioned that Bin Laden was preparing to hijack planes a month before 9/11, was particularly suspicious. In that case, an unsuspicious reaction was probably reasonable, but applying that standard of evidence universally means there is practically no amount of information about the state of Mossad that would “prove” to mainstream media or Wikipedia editors that they either elected to ignore Hamas’s incoming attack entirely or muttered “heads I win, tails I win”.
As for whether or not a country would be “willing” to do this particular thing, sacrifice a few thousand civilians to provide a casus belli for an annexation or to shore up support from abroad… Well, it’s not par for the course, at least in developed democracies, but, Many Such Cases, as the saying goes. My moral self is appalled by the lack of respect for deontological guardrails, but I will admit that this highlights the violent hatred of Israel’s enemies in a way that I think dismantling the plot would be unable to. How else are they supposed to justify annexing the Gaza strip and incidentally expelling much of the native population without it being a reaction to clear crimes against humanity? Where else is Netanyahu supposed to get his poll bump from?
It’s worth keeping in mind that the actions of Mossad and those of Netanyahu are different. Pentagon leaks suggest that senior Mossad leadership was supporting protests against Netanyahu’s policies. Former Mossad leaders also spoke out.
That they are acting to give Netanyahu a poll bump, and can do so without internal leaks that undermine the project, seems unlikely to me.
Imagine that the CIA had warned Trump of a terror attack and Trump didn’t act. Do you think that would be kept secret in the same way the Bush administration’s inaction was kept secret?
It is hard for me to tell whether my not using GPT-4 as a programmer is because I’m some kind of boomer, or because it’s actually not that useful outside of filling Google’s gaps.
Why not both? For me, age and curmudgeonliness make me reject it for not being “enough better”. I’m not sure what my standards are, but I recognize that what I’ve tried so far isn’t perfect, but is probably somewhat faster than search-and-modify. Just not ENOUGH to get me to invest the time in getting as good at it as I am at more traditional semi-plagiarism.
I’ve tried it. Here are some examples. I didn’t save the original prompts and answers.
Use the elliptic functions provided by Matlab to calculate the length of an elliptic arc. (I already knew that there is a closed-form solution to this in terms of elliptic functions. Everyone writing an introduction to elliptic functions mentions this, but I have never seen anyone give an actual formula. The task is complicated by the existence of multiple conventions for defining these functions.)
It gave some Matlab code using elliptic functions, but it was simply wrong. With some effort of my own, I eventually worked out a correct formula, and verified that it agreed with numerical integration.
Devise a function that maps [0,1] onto [1,∞], is strictly increasing and differentiable, and which takes a parameter specifying how late and sharp its divergence to ∞ should be. Then program it in Matlab.
It produced a function that missed several of the requested properties. It also programmed it in Matlab, but given that the function was wrong, I didn’t bother to see if the programming was right.
I chose this problem because I’d recently needed to do that myself. It took no longer to do it right myself than to ask the LLM and determine whether it got it right — which it didn’t, so that would have been wasted time.
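For comparison, here is one family of functions that seems to satisfy the stated requirements. This is my own sketch (in Python rather than Matlab, and not the formula the commenter actually used): f(x) = 1 / (1 − x^k), where larger k keeps the function near 1 for longer, so the divergence at 1 arrives later and more sharply.

```python
import numpy as np

def late_sharp_divergence(x, k=4.0):
    """Map [0, 1) onto [1, inf): f(x) = 1 / (1 - x**k).

    f(0) = 1 and f(x) -> inf as x -> 1; f is strictly increasing and
    differentiable on (0, 1). Larger k keeps f close to 1 for longer,
    making the eventual divergence later and sharper.
    """
    x = np.asarray(x, dtype=float)
    return 1.0 / (1.0 - x ** k)

# Quick numerical sanity check of the endpoints and monotonicity.
xs = np.linspace(0.0, 0.99, 200)
ys = late_sharp_divergence(xs, k=4.0)
print(ys[0])                    # 1.0
print(ys[-1])                   # ~25.4, blowing up as x -> 1
print(np.all(np.diff(ys) > 0))  # True on this grid
```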
I asked it how to modify an iOS app to respond to the user’s dark/light mode setting.
The answer it gave me I could have looked up on Google just as quickly, and Google’s answer had the advantage of going directly to Apple’s documentation and a WWDC presentation, sources of ground truth rather than the ungrounded vagaries of a chatbot which were not even worth reading.
Score: 0⁄3. This is typical of the results I see from LLMs on every task they are applied to, whether mine or other people’s. When I have a question, I want an answer that rings like a bell, not an LLM’s leaden clunk.
If it did actually turn out that aliens had visited Earth, I’d be pretty willing to completely scrap the entire Yudkowskian implied-model-of-intelligent-species-development and heavily reevaluate my concerns around AI safety.
If that turned out to be the case, my preliminary conclusion would be that the hard physical limits of technology are much lower than I’d previously believed.
You don’t hear much about the economic calculation problem anymore, because “we lack a big computer for performing economic calculations” was always an extremely absurd reason to dislike communism. The real problem with central planning is that most of the time the central planner is a dictator who has no incentive to run anything well in the first place, and gets selected by ruthlessness from a pool of existing apparatchiks, and gets paranoid about stability and goes on political purges.
What are some other, modern, “autistic” explanations for social dysfunction? Cases where there’s an abstract economic or sociological argument about why certain policy/command structures are bad, which are mostly rationalizations designed to fit obviously correct conclusions into an existing field that wouldn’t accept them in their normal format?
I agree with your characterization of the problem with central planning, and that we don’t hear much about the economic calculation problem anymore, but… “we lack a big computer for performing economic calculations” was not an absurd reason to dislike communism, it was literally true.
All digital computers ever possessed by or within the Soviet Union had, in total, less FLOP/s than a single A100 GPU; it’s harder to get numbers for memory but the ratio is pretty stable over time. Their techniques were also enormously less efficient than modern optimization software (MILP, SMT, etc. etc.); in benchmarks this is a bigger deal than hardware progress. Amazon routinely solves planning problems which were fundamentally intractable for any 20th century government, and has enormously more data with which to solve them.
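As a toy illustration of what that optimization software does (my own sketch, not anything from the comment above; the product mix and numbers are invented), here is a two-good allocation problem of the kind a modern LP solver dispatches in well under a millisecond:

```python
# Toy "central planning" allocation: maximize the value of output subject
# to labor and steel constraints. scipy's linprog minimizes, so the
# objective is negated. All numbers are made up for illustration.
from scipy.optimize import linprog

c = [-3.0, -2.0]            # value per tractor, per truck (negated)
A_ub = [[4.0, 2.0],         # labor-hours needed per unit
        [1.0, 1.5]]         # tons of steel needed per unit
b_ub = [100.0, 40.0]        # available labor-hours, tons of steel

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)     # optimal plan, roughly [17.5, 15.0]
print(-res.fun)  # total value of the plan, roughly 82.5
```

A realistic planning problem has many orders of magnitude more variables than two, which is the gap between Soviet-era hardware-plus-hand-methods and modern solvers that the comment is pointing at.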
That said, I think the real real problem with central planning is that it’s… central. The price mechanism plus decentralized decisionmaking turns out to be a fantastic combination for eliciting (and arguably developing) preferences, and once you get past problems like “almost everyone is starving because our economy was based on subsistence agriculture and then wrecked by invasion” that can be solved by “grow grain, make steel, pour concrete”, you’d still be screwed even if your socialist central planners were implausibly competent and benevolent. You can get around that somewhat by allowing markets (post-Deng China), or elicit preferences with ‘shadow prices’ (a regular cause of purges among Soviet economists), but in practice you keep running into the problems caused by the ways that dictators take and keep power.
We have large centralized companies. For better or worse those companies don’t use big computers to make economic calculations that output the company decisions at the top level.
Our political system also doesn’t use big computer models to decide on economic policy. Before we had the computational capacity, we might have thought that we would do that once we had it, but it turns out we don’t.
Computer hacking is not a particularly significant medium of communication between prominent AI research labs, nonprofits, or academic researchers. Much more often than leaked trade secrets, ML people will just use insights found in this online repository called arxiv, where many of them openly and intentionally publish their findings. Nor (as far as I am aware) are stolen trade secrets a significant source of foundational insights for researchers making capabilities gains, local to their institution or otherwise.
I don’t see this changing on its own, regardless of how “close” we are to developing AGI. So for now, increasing information security standards across the field seems to me like a waste of time, particularly when talking about alignment labs that (hopefully) pioneer a fraction of a fraction of relevant capabilities research. It’s hard for me to imagine a timeline in which MIRI is safeguarding a big red button from China that’s not also an Ultra Fucked timeline, without the above facts also changing.
An evil part of me would really love for cybersecurity to be very relevant to AI alignment, because it’s super interesting and also my field, but (fortunately?) I really don’t understand the people who claim that it is. I could be missing something very critical though.
Do we have a good idea of how the resources at prominent AI research labs compare to the resources that go into Five Eyes AI models for intelligence analysis, and into Chinese government pursuits?
I’ve forgotten at this point who they are, but I will ask some of my friends later to give me some of the public URLs of the “big players” working in this space so you can partly see for yourself. Their marketing is really impressive because government contractors, but I encourage you to actually look at the product on a technical level.
Largely: the NSA and its military-industrial partners don’t come up with new innovations, except as applies to handling the massive amounts of data they have and their interesting information security requirements. They just apply technologies and insights from companies like OpenAI or DeepMind. They’re certainly using things like large language models to scan your emails now, but that’s because OpenAI did the hard work already.
More importantly, when they do come up with innovations, they don’t publish them on the internet, so they don’t burn much of the “commons”, as it were.
Largely: the NSA and its military-industrial partners don’t come up with new innovations, except as applies to handling the massive amounts of data they have and their interesting information security requirements.
There was a long period when the NSA did come up with cryptography-related math innovations in secret and did not share that information publicly.
The NSA does see itself as the leading employer of mathematicians in the United States. To the extent that those employees come up with groundbreaking insights, those are likely classified and you won’t find them in the marketing materials of government contractors.
It is unnecessary to postulate that CEOs and governments will be “overthrown” by rogue AI. Board members in the future will insist that their company appoint an AI to run the company because they think they’ll get better returns that way. Congressmen will use them to manage their campaigns and draft their laws. Heads of state will use them to manage their militaries and police agencies. If someone objects that their AI is really unreliable or doesn’t look like it shares their values, someone else on the board will say “But $NFGM is doing the same thing; we obviously need to stay competitive with them” and that will be the end of the debate. Deep technical safety concerns about mesaoptimizers will not even be brought up during the meeting. AIs will just slowly capture all of our institutions and begin to write and enforce our laws because we design and build them for that purpose. We are actually that stupid.
You should not say that this is the only concern; in fact you should explicitly state that it’s not the only one. But you should mention this first, because it’s way more understandable to lots of people than the idea that superintelligent machines will have hard power and manage to overturn the federal government directly, for some reason.
Currently reading The Rise and Fall of the Third Reich for the first time. I’ve wanted to read a book about Nazi Germany for a while now, and tried more “modern” and “updated” books, but IMO they are still pretty inferior to this one. The recent books from historians I looked at were concerned more with an ideological opposition to Great Men theories than factual accuracy, and also simply failed to hold my attention. Newer books are also necessarily written by someone who wasn’t there, and by someone who does not feel comfortable commenting about events from a first person perspective as such.
This book, though, is riveting, and I have to resist the impulse to look people up on Wikipedia as I read their names, just to keep the narrative going. There are so many details about the buildup to WW2 here that I was just unaware of going in. I think everybody knows of Chamberlain as the Hitler appeasement guy, and Wikipedia tells you he maybe gets a bad rap in retrospect, but the full extent of his treachery and spinelessness is sickening. I am creating a “highlights from” post based on the book and encourage you guys to read it yourselves if you want to learn more about authoritarianism.
I have been working on a detailed post for about a month and a half now about how computer security is going to get catastrophically worse as we get the next 3-10 years of AI advancements, and unfortunately reality is moving faster than I can finish it:
I understand, though I’d still like to see that post, especially as it relates to some of the more advanced attacks. Unfortunately yeah it’s already happening, though not much has come of it so far.
I have always understood that the CIA, and the U.S. intelligence community more broadly, is incompetent (not just misaligned—incompetent, don’t believe the people on here who tell you otherwise), but this piece from Reuters has shocked me:
Good rationalists have an absurd advantage over the field in altruism, and only a marginal advantage in highly optimized status arenas like tech startups. The human brain is already designed to be effective when it comes to status competitions, and systematically ineffective when it comes to helping other people.
So it’s much more of a tragedy for the competent rationalist to choose to spend most of their time competing in those arenas than to take a shot at a wacky idea you have for helping others. You might reasonably expect to be better at it than 99% of the people who (respectably!) attempt to do so. Consider not burning that advantage!
I don’t think I agree with the premise, but it’s a really weird comparison. “advantage over the field” is kind of meaningless for altruism, where the goal really should be cooperation with the field in improvements for (subsets of) people. Tech startups ALSO benefit from this attitude, in that you’re trying to align your company to provide more utility to customers, though it also includes more explicit competition among companies and individuals.
Tech startups (and lucrative employment in non-startups) ARE a much bigger arena, so the competitive parts have much stronger competition. I guess to that extent, I agree—altruism is easier, if you care about relative rank rather than absolute results. I don’t know the altruism world enough to know how much status competition there is, but the local food and employment charities I’ve been involved with don’t seem immune at all.
Especially to non-native speakers, it’s not at all obvious that Bayes’ Theorem and Based Theorem sound almost the same since d, which reads like t, merges with th.
In the same way that Chinese people forgot how to write characters by hand, I think most programmers will forget how to write code without LLM editors or plugins pretty soon.
Once the usage of AI editors becomes mainstream, the programming languages themselves may start evolving in a direction of no longer being legible for an unaided human, because why not. Complaining about not being able to understand the source code will sound similar to complaining about not being able to read the binary code today. Like “yeah, but you are not supposed to do that, that’s what the algorithm is for”.
They may, but I think the AI code generators would have to be quite good. As long as the LLMs are merely complementing programming languages, I expect them to remain human-readable & writable; only once they are replacing existing programming languages do I expect serious inscrutability. Programming language development can be surprisingly antiquated and old-fashioned: there are many ways to design a language or encode it where it could be infeasible to ‘write’ it without a specialized program, and yet, in practice, pretty much every language you’ll use which is not a domain-specific (usually proprietary) tool will let you write source code in a plain text editor like Notepad or nano.
The use of syntax highlighting goes back to at least the ALGOL report, and yet, something like 50 years later, there are not many languages which can’t be read without syntax highlighting. In fact, there are very few which can’t be programmed just fine with solely ASCII characters in an 80-col teletype terminal, still. (APL famously failed to ever break out of a niche and all spiritual successors have generally found it wiser to at least provide a ‘plain text’ encoding; Fortress likewise never became more than a R&D project.) Like this website—HTML, CSS, JS, maybe some languages which compile to JS, SVG… all writable in a 1970s Unix minicomputer printing out to physical paper.
Or consider IDEs which operate at ‘project’ level or have ‘tags’ or otherwise parse the code in order to allow lookups of names, like methods on an object—you could imagine programming languages where these are not able to be written out normally because they are actually opaque UUIDs/blobs/capabilities, and you use a structural editor (similar to spreadsheets) to modify everything, instead of typing out names letter by letter like a barbarian. (And ‘visual’ programming languages often do such a thing.) The Smalltalk systems where you did everything by iteratively interacting with GUI objects come to mind as systems where it’s not even clear what the ‘plain text’ version is, after you’ve used the systems dynamically as they were intended to be used, and rewritten enough objects or overridden enough methods… But again, few languages in widespread use will do that.
There’s a particular AI enabled cybersecurity attack vector that I expect is going to cause a lot of problems in the next year or two. Like, every large organization is gonna get hacked in the same way. But I don’t know the solution to the problem, and I fear giving particulars on how it would work at a particular FAANG would just make the issue worse.
I don’t understand why you wouldn’t just follow normal responsible disclosure practices here, e.g. just disclose this to Google and then leave it to them.
Google’s red team already knows. They have known about the problem for at least six months and abused the issue successfully in engagements to get very significant access. They’re just not really sure what to do because the only solutions they can come up with involve massively disruptive changes.
I know some pretty senior people in security for 2 FAANG companies, and passing acquaintance at others, and currently work in the Security org at a comparable company. All of them have reporting channels for specific threats, and none (that I know) are ignorant of the range of AI-enabled attacks that are likely in the near future (shockingly many already). The conversations I’ve had (regarding products or components I do know pretty well) have convinced me that everything I come up with is already on their radar (though some are of the form “Yeah, that’s gonna happen and it’s gonna suck. Current strategy is to watch for it and not talk much about it, in order not to encourage it”).
Without disclosing some details, there’s probably no way to determine whether your knowledge or theory is something they can update on. I’m happy to pass on any information, but I can’t see why you’d trust me more than more direct employees of the future victims.
The security team definitely know about the attack vector and I’ve spoken to them. It’s just that neither I nor they really know what the industry as a whole is going to do about it.
Serial murder seems like an extremely laborious task. For every actual serial killer out there, there have to be at least a hundred people who would really like to be serial killers, but lack the gumption or agency and just resign themselves to playing video games.
I sometimes read someone on here who disagrees fiercely with Eliezer, or has some kind of beef with standard LessWrong/doomer ideology, and instinctively imagine that they’re different from the median LW user in other ways, like not being caricaturishly nerdy. But it turns out we’re all caricaturishly nerdy.
There is a kind of decadence that has seeped into first world countries ever since they stopped seriously fearing conventional war. I would not bring war back in order to end the decadence, but I do lament that governments lack an obvious existential problem of a similar caliber, one that might coerce their leaders and their citizenry into taking foreign and domestic policy seriously, and keep them from devolving into mindless populism and infighting.
To the extent that “The Cathedral” was ever a real thing, I think whatever social mechanisms that supported it have begun collapsing or at least retreating to a fallback line in very recent years. Just a feeling.
Conspiracy theory: sometime in the last twenty years the CIA developed actually effective polygraphs and the government has been using them to weed out spies at intelligence agencies. This is why there haven’t been any big American espionage cases in the past ten years or so.
If I was still a computer security engineer and had never found LessWrong, I’d probably be low key hyped about all of the new classes of prompt injection and social engineering bugs that ChatGPT plugins are going to spawn.
Injections don’t deal with the model itself; they would be handled just like any other input-prompt security protocol. Heck, I surely hope ChatGPT doesn’t execute code with root permissions.
I didn’t know you could do that. Truly dangerous times we live in. I’m serious. More dangerous because of the hype. Hype means more unqualified participation.
Forcing your predictions, even if they rely on intuition, to land on nice round numbers so others don’t infer things about the significant digits is sacrificing accuracy for the appearance of intellectual modesty. If you’re around people who shouldn’t care about the latter, you should feel free to throw out numbers like 86.2% and just clarify that your actual precision is nowhere near 0.1%, if that’s just the best available number for you to pick.
Every five years since I was 11 I’ve watched The Dark Knight thinking “maybe this time I’ll find out it wasn’t actually as good as I remember it being”. So far it’s only gotten better each time.
Hmm. Can’t upvote+disagree for shortform entries. I like hearing about others’ preferences and experiences in cultural and artistic realms, so thanks for that. I’m not sure I exactly disagree—the movie was very good, but not in my top-10 - I need to re-watch it, but previous re-watches have been within epsilon of my expectations—still good, but no better nor worse than before.
Can you identify the element(s) that you expect to age badly, or you think you overvalued before, and which surprised you by still being great? Or just the consistency of vision and feel through all the details?
Also, if you are even a little bit of a Batman or superhero connoisseur, I highly recommend Birdman (2014).
Can you identify the element(s) that you expect to age badly, or you think you overvalued before, and which surprised you by still being great?
One of the very surprising ones is this sense of something cousined to “realism”. Specifically how much the city of Gotham could be seamlessly replaced with “Juarez” or “Sinaloa” and become an uncomfortably on-point tragedy about the never-ending war between honest men and organized bandits in those regions. The level of corruption and government ineffectiveness, the open coordination and power sharing between the criminals carving up the city, and the ubiquitous terrorism, are unrealistic for modern America and yet as a premise they are pretty much unassailable, because cities as bad as TDK::Gotham or worse exist around the world today.
Another is, I’m not ashamed to say it, the depth of the social commentary. You are setting yourself up to be the cringiest of cringe by saying that the Joker says something deep in a movie, at this point, but I honestly find the following quote, from his exchange with Harvey in the middle of the movie, a little gut-wrenching:
Joker: Look what I did to this city with a few drums of gas and a couple of bullets. Hm?
You know what—You know what I noticed? Nobody panics when things go “according to plan”. Even if the plan is horrifying.
If tomorrow I tell the press that like a gangbanger will get shot, or a truckload of soldiers will be blown up, nobody panics. Because it’s all a part of the plan. But if I say that one, little old mayor will die, well then everybody loses their minds.
Also it’s just a really well done movie! It says a particular thing it wants to say, very well, and doesn’t really trip and fall over itself at any point in its runtime.
Made an opinionated “update” for the anti-kibitzer mode script; it works for current LessWrong with its agree/disagree votes and all that jazz, fixes some longstanding bugs that break the formatting of the site and allow you to see votes in certain places, and doesn’t indent usernames anymore. Install Tampermonkey and browse to this link if you’d like to use it.
Semi-related, I am instituting a Reign Of Terror policy for my poasts/shortform, which I will update my moderation policy with. The general goal of these policies is to reduce the amount of time I spend thinking and distressing about the same shit every social media platform makes you stress out about to the detriment of your mental health: which person is commenting on my posts, what upboats are they/I getting, Muh Status, etc. I respect that these are stringent enough requirements, that I have epsilon negotiating leverage, and that I will probably end up either banning or scaring off nine of the ten people that would have ever commented on or read the things I plan to write on LW. This is the only way I think I’ll be able to tolerate writing anything for smart people moving forward, so I’m going to do it even if nobody ever comments on my posts again.
The Reign of Terror policy says:
You must respect the wishes of those who are using the anti-kibitzer script not to know who you are. This means not saying stuff that people on here could reasonably use to infer your identity, even if “who you are” is not something you’d expect to make people take what you say better. The exception is if you’re drawing on absolutely fucking critical anecdotal evidence, like when disputing serious factual mistakes about you or someone you personally know. I will be extremely unforgiving about exceptions, which will be rare because of the rule below.
No discussions of specific individuals, if they could ever be reasonably expected to read anything you write, or if anybody else who personally knows them could ever be reasonably expected to read anything you write, perhaps by Googling their name and coming up on the post or by searching for themselves inside LW.
Examples of people whose names you can utter: Xi Jinping, Jeffrey Epstein. Examples of people whose names you can’t utter: Eliezer Yudkowsky, Sam Altman, the name of my religious college friend who has a sysadmin job now.
It’s of course sometimes necessary in rationality discussions to imply things about particular people, for example to respond to their ideas or the ideas of a specific group like “Christians”, but I will only allow this if you literally have no other way of effectively making some broader point. If it seems to me like you missed a more general way of making said point but you did your best I won’t permaban you and instead just ask you to modify your comment.
No implying positive status differences between you and either the median American computer programmer or another commenter, under any circumstances, regardless of how much it would contribute to the conversation for you to do so. If you have something you feel like you have to say entirely independent of the fact that it makes you look cool (and thus makes other people feel small), you must either refrain from saying it, or find a way to say it in a way that doesn’t do that. No asking people for information that would confirm/disconfirm such things either.
There is no separate policy or exception for people who got their status by founding a save-the-drowning-children corporation.
I will be sooooo retarded about this rule. If I don’t ban someone in the next six months for breaking this rule, I will go ahead and ban the person with the highest log-odds of having broken this rule in a way I didn’t understand, just to precommit to the three people that have read this far that I mean business.
Rudeness is allowed, but only rudeness that makes you look dumb and the other person look good. Think, 4chan rudeness, when 4chan isn’t being totally spiteful or mean. If you take that kind of 4chan rudeness seriously and you start commenting on my posts you have forfeited any and all sympathy from me in particular and will just be pointed to Da Rules.
Please install the anti-kibitzer script. Not a rule, I couldn’t enforce it anyways, but strongly suggested.
This only applies to posts and shortforms I make from-now-on. I certainly haven’t followed it myself. You of course get one opportunity to follow the reign of terror policy and then I will click the ban forever button, because even if you’ve been a diligent sport throughout the last 800 posts, I won’t know who you are when you decide to mention in passing that you used to work at Google or MIRI or whatever.
I have no clue whether any of my previous comments on your posts will qualify me for perma-ban, but if so, please do so now, to save the trouble of future annoyance since I have no intention of changing anything. I am generally respectful, but I don’t expect to fully understand these rules, let alone follow them.
I have no authority over this, but I’d hope the mods choose not to frontpage anything that has a particularly odd and restrictive comment policy, or a surprisingly-large ban list.
I’d hope the mods choose not to frontpage anything that has a particularly odd and restrictive comment policy
I think it’s better to annoy commenters than to annoy post authors, so actually allowing serious Reign of Terror is better than meaningfully discouraging it. That’s the whole point of Reign of Terror, and as the name suggests it shouldn’t be guaranteed to be comfortable for its subjects.
One problem with how it’s currently used is authors placing Reign of Terror policy for their own comfort in a motte/bailey way, without any actual harsh moderation activity, inflating the category into the territory of expected comfort for the commenters. There should be weak incentive for authors to not do this if they don’t actually care.
For a lot of posts, the value is pretty evenly distributed among the post and the comments. For frontpage-worthy ones, it’s probably weighted more to posts, granted. I fully agree that “reign of terror” is not sufficient reason to keep something off frontpage.
I was reacting more to the very detailed rules that don’t (to me) match my intuitions of good commenting on LW, and the declaration of perma-bans with fairly small provocation. A lot will depend on implementation—how many comments lc allows, and how many commenters get banned.
Mostly, I really hope LW doesn’t become a publishing medium rather than a discussion space.
I was reacting more to the very detailed rules that don’t (to me) match my intuitions of good commenting on LW, and the declaration of perma-bans with fairly small provocation. A lot will depend on implementation—how many comments lc allows, and how many commenters get banned.
There’s practically no reason on a rationality forum for you to assert your identity or personal status over another commenter. I agree the rules I’ve given are very detailed. I don’t agree that the vast majority of valuable comments on LessWrong would somehow be bannable by my standard.
The reason I’m stringent about doing this, is because the status asserting comments literally ruin it for everybody else, even when the majority of everybody else is not interested in such competitions. They make people like me, who are jealous and insecure, review everything they’ve ever written in the light that they might be judged. I don’t come here because I want to engage in yet another status tournament. I come here because I want to become a better thinker and learn new and interesting things about the world. I also come here because I like being able to presume that most of the other commenters are using the forum like I am. In this sense it’s worth it to me if this policy prevents one person from trying to social climb even if I have to prevent four other comments that wouldn’t otherwise be a problem.
As I said, obviously this is not a retroactively applying policy, I have not followed it until now, and I will not ban anybody for commenting differently on my posts. I’m not going to ban you pre-emptively or judge you harshly for not following all of my ridiculously complicated rules. Feel free to continue commenting on my posts as you please and just let me eventually ban you; that’s honestly fine by me and you should not feel bad about it.
I personally hope they would not refuse to frontpage my posts from now on for having a restrictive comment policy when it’s not obviously censoring criticism of the post itself, but I have already forfeited arbitrarily large amounts of exposure and the mods can do what they wish.
Based on Victoria Nuland’s recent senate testimony, I’m registering a 66% prediction that those U.S.-administered biological weapons facilities in Ukraine actually do indeed exist, and are not Russian propaganda.
Of course I don’t think this is why they invaded, but the media is painting this as a crazy conspiracy theory, when they have very little reason to know either way.
I glean that “biolab” is actually an extremely vague term, and doesn’t specify the facility’s exact capabilities at all. They could very well have had an innocuous purpose, but Russia would’ve had to treat them as a potential threat to national security, in the same way that Russian or Chinese “biolabs” in Mexico might sound bad to the US, except Russia is even more paranoid.
It seems like “biological weapons facility” is a quite subjective term. The US position is that their own army labs that produced anthrax that was used after 9/11 are not a “biological weapons facility” because while they do produce anthrax that could be used militarily, it’s not produced with the intent of military use.
Based on those definitions it’s plausible that the Ukrainian labs produce viruses that can be weaponized but that the US just doesn’t see them as a “biological weapons facility” because they believe the intent for offensive use isn’t there.
If you make exact predictions like that you should define what you mean by your terms.
It’s like Fauci’s dance saying that there’s no gain-of-function research in the paper he mailed around with gain-of-function in the filename. The US government doesn’t use commonsense definitions for words when it comes to biosafety.
The US government doesn’t use commonsense definitions for words when it comes to biosafety.
I use the common sense definition where if, for example, there’s military risk in letting your enemies get ahold of them because they’re dangerous viruses deliberately designed to maximize damage, that’s a bioweapon.
I’m registering a 90% prediction that those facilities do not exist, as in “how the hell would the US have been dumb enough to plant biological weapons facilities in a remote country outside their sphere of influence, where Russia has (or had, until recently) a lot of weight...”
10xing my income did absolutely nothing for my dating life. It had so little impact that I am now suspicious of all of the people who suggest this more than marginally improves sexual success for men.
For example, I can imagine someone getting a 10x income in a completely invisible way, such as making some smartphone games anonymously, selling them on the app store, putting all the extra money in a bank account, while living exactly the same way as before: keeping their day job, keeping the same spending habits, etc. That kind of income increase would obviously have no impact, as it is almost epiphenomenal.
Also, if you 10x your income by finding a job that requires you to work 16 hours a day, 7 days a week, the impact on dating will be negative, as you will now have no time to meet people. Similarly, if the better paying work makes you so tired that you just don’t have any energy left for social activities in your free time, etc.
But if we imagine a situation like “you have the same kind of 9-5 job that takes the same amount of your energy, except somehow your salary is now 10x what it used to be (and maybe you have a more impressive job title)”...
I guess you could buy some signals of wealth, such as more expensive clothes, car, watches. (This won’t happen automatically; you have to actually do it.) You could get some extra free time by paying people to do something that you previously spent your own time doing, such as cooking and cleaning. More free time means more opportunities to meet people. (Again, this won’t happen automatically.) Finally, having more money allows you to visit places that were previously too expensive for you. Some women might prefer such places, because being there automatically filters the kind of men they meet. (This also won’t happen automatically.) I guess the obvious question is whether you did any of this.
Finally, the way having more money can dramatically improve your dating life is if you wisely invest the money in index funds, get enough passive income, and quit your job. Suddenly you have 16 hours a day to socialize, and basically you can optimize your life to meet more women. You could even do some non-obvious thing, such as choose a low-paying job that comes with a 90% female workplace. If that doesn’t help, I would be surprised.
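For a sense of scale, here is the rough arithmetic behind that last option; every number below is an assumption I made up for illustration, not advice:

```python
# Rough arithmetic behind "invest in index funds and live off passive income".
# All figures are made-up assumptions for illustration.
annual_spending = 30_000          # assumed target lifestyle cost
safe_withdrawal_rate = 0.04       # commonly assumed sustainable withdrawal rate
real_return = 0.05                # assumed real return while accumulating
annual_savings = 150_000          # what a 10x income might plausibly let you save

target_portfolio = annual_spending / safe_withdrawal_rate
portfolio, years = 0.0, 0
while portfolio < target_portfolio:
    portfolio = portfolio * (1 + real_return) + annual_savings
    years += 1
print(f"need ~${target_portfolio:,.0f}; reached in about {years} years")
```

Under these particular assumptions you hit the target in about five years; with more modest savings the timeline stretches out accordingly.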
You have to adopt the lifestyle associated with a high income: eating significantly healthier, working out frequently, and dressing nicely. Secondly, a high income in an engineering field (given the priors, this seems likely) does not mean you have the ability to converse effectively in social settings. Communicating well with women is a skill. Historically, a high income was associated with an ability to manipulate social groups for your own gain; now it’s more closely associated with understanding the world at a deep level. Income is a means to an end but it is not the end itself.
To date effectively, and meet women, one needs to establish trust. This means meeting somebody through your social network (friends), or in an institution of high trust, such as a secret meeting of only elite people, etc. Also, women enjoy being dominated. If you go down the social strata you will find it much easier to date than dating a peer or superior. Aella once wrote “a woman just wants to be railed by a man she respects”, so be worthy of respect in all the ways that are not income.
I wonder if the original purpose of Catholic confession was to extract blackmail material/monitor converts, similar to what modern cults sometimes do.
I wonder how a historian could answer this question. Even if it was true, someone would have to be stupid enough to write it down explicitly. On the other hand, most people were illiterate, so maybe writing itself was effectively a secret code for clergy. But even then… the priests doing this would not necessarily have to realize this; they could do it primarily to absolve the sins, and only use the blackmail as an afterthought. Also, the mere possibility of blackmail is already a power.
As an argument against this, there is the concept of a “confessional secret” that is taken very seriously by Catholics. Revealing the secret would cost the priest his job at the very least; often it would also be punished by prison, historically sometimes by death. There are officially no exceptions: no matter the crime, not even if the Pope commanded you to reveal the secret. It is even considered a sin if the priest thinks too much about the contents of the confession afterwards. -- That said, I do not know whether these rules were there from the very beginning, or whether they only started a few centuries later.
I realize that this is not the purpose of confession today, or even during the Middle Ages. Since 1000 AD it’s been very earnest. I just suspect it has sinister origins.
“Men lift for themselves/to dominate other men” is the absurd final boss of ritualistic insights-chasing internet discourse. Don’t twist your mind into an Escher painting trying to read hansonian inner meanings into everything.
In other news, women wear makeup because it makes them more attractive.
Women also wear expensive designer handbags that men don’t care about at all but other women do care about.
If a woman has the choice to wear an outfit that makes her more attractive to men but makes her lose status with other women who believe that it looks slutty, she usually doesn’t maximize attractiveness to men.
Women don’t only care about attractiveness to men, but “women wear makeup because {some_weird_internal_psychological_thing}” is unhelpful. You are better served by the “women wear makeup for other people” heuristic, because it lets you arrive at conclusions like “women tend to apply makeup much less when they stay indoors eating cheetos”.
“Men lift to be able to dominate other men” wouldn’t be about {some_weird_internal_psychological_thing}, but about social interaction.
If attractiveness were the key thing that matters, you would expect a woman to wear less makeup when she goes to an event where there are only women than when she goes to an event with mixed genders. While I don’t have hard statistics, I don’t think that’s the case.
Arbitrary motivations endorsed on reflection can be found in all sorts of activities. An unusual motivation can be as genuine as any other, it’s not always a usual motivation clad in self-deception. People get to decide their values.
I found it to be very interesting and entertaining, the sort of reading which is enjoyable even to those who disagree with it. I can’t write anything on the topic myself which isn’t objectively worse than the link I’ve provided.
Getting “building something no one wants” vibes from the AI girlfriend startups. I don’t think men are going to drop out of the dating market until we have some kind of robotics/social revolution, possibly post-AGI. Lonely dudes are just not that interested in talking to chatbots that (so they believe) lack any kind of internal emotion or psychological life, cannot be shown to their friends/parents, and cannot have sex or bear children.
I agree: the capabilities of AI romantic partners probably aren’t the bottleneck to their wider adoption, considering the success of relatively primitive chatbots like Replika at attracting users. People sometimes become romantically attached to non-AI anime/video game characters despite not being able to interact with them at all! There doesn’t appear to be much correlation between the interactive capabilities of fictional-character romantic partners and their appeal to users/followers.
There’s a parallel here with VR. Some part of people’s intuition says that VR porn/video games have to be a Next Evolution over simple screen + keyboard interfaces, worth pouring billions of dollars into, because VR is “more immersive” or something. But actually a laptop and a USB mouse work just fine.
I disagree. There’s a lot of low-hanging fruit in the AI waifu space[1]. Lack of internal emotion or psychological life? Just simulate internal monologue. Lack of long-term memory? Have the AI waifu keep a journal. Lack of visuals? Use a LoRA fine-tuned diffusion model alongside the text chat.
I’d be building my own AI waifu startup if we didn’t face x-risks. It seems fun (like building your own video game), and probably a great benefit to its users.
Also, lonely men will not be the only (or even primary) user demographic. Women seem to read a lot of erotica. I expect that this is an untapped market of users, and pandering to it will not make your startup look low status either.
[1]: Not using the word “girlfriend” here because I’d like to use a more gender-neutral term, and “waifu” seems pretty gender-neutral to me, and to one target demographic of such services.
I’d be building my own AI waifu startup if we didn’t face x-risks. It seems fun (like building your own video game), and probably a great benefit to its users.
I wonder if we will ever have a sexbot revolution. The urge to regulate other people’s sexuality seems too strong. I can imagine a future where people spend most of their time in virtual reality that allows them to do almost anything… except, if they want some sexual experience, a stern robotic voice reminds them that this would violate the Terms of Service.
There’s a portion of Project Lawful where Keltham contemplates a strategy of releasing Rovagug as a way to “distract” the Gods while Keltham does something sinister.
Wouldn’t Lawful beings with good decision theory precommit to not being distracted and just immediately squish Keltham, thereby being immune to those sorts of strategies?
At least according to CNN’s exit polls, a white person in their twenties was only 6% less likely to vote for Trump in 2020 than a white person above the age of sixty!
This was actually very surprising for me; I think a lot of people have a background sense that younger white voters are much less socially and politically conservative. That might still be true, but the ones that choose to vote, vote Republican at basically the same rate in national elections.
I imagined something more distributed, because people disagree on what a “good person” means, so maybe the solution could be to let everyone use their own personal definition, and make a system that supports that. For example, you could specify whether someone is a good person, and separately whether you trust someone’s judgment about whether other people are good. And then you could ask about someone, and the system would tell you what is the opinion of the people whose judgment you trust.
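A minimal sketch of the kind of lookup I have in mind, with made-up names and a trivially simple data model (this is an illustration, not a real protocol):

```python
from collections import defaultdict

class ReputationWeb:
    """Toy web-of-trust: everyone keeps their own ratings and their own list
    of people whose judgment they trust."""

    def __init__(self):
        self.is_good = defaultdict(dict)   # rater -> {subject: bool}
        self.trusts = defaultdict(set)     # person -> set of raters they trust

    def rate(self, rater, subject, good):
        self.is_good[rater][subject] = good

    def trust(self, person, rater):
        self.trusts[person].add(rater)

    def ask(self, asker, subject):
        """Fraction of the asker's trusted raters who call `subject` a good
        person (None if no trusted rater has an opinion)."""
        opinions = [self.is_good[r][subject]
                    for r in self.trusts[asker]
                    if subject in self.is_good[r]]
        if not opinions:
            return None
        return sum(opinions) / len(opinions)

web = ReputationWeb()
web.rate("alice", "carol", True)
web.rate("bob", "carol", False)
web.trust("me", "alice")
web.trust("me", "bob")
print(web.ask("me", "carol"))  # 0.5 -- my trusted raters are split on carol
```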
But the problems are obvious. People with power would punish you in real life for giving them negative ratings. If most people are afraid to give a bad rating to their current boss or their priest, then this simply becomes a database of people with political power. And the more specific the feedback you provide on others, the more useful the system becomes (information like “this person may steal your money” or “this person may try to rape you” is way more useful than an unspecific “I think this person is bad”), but it also makes them more likely to sue you.
Conversely, people would provide false information about the ones they hate. Where you now see a twitter mob trying to get someone fired, in this system they would probably all enter some false information about having a specific negative interaction with given person. You could try to detect this behavior, but then people would learn to overcome detection, leading to an arms race (e.g. the system could detect that if million people across the planet say on the same day that you punched them, it’s probably a lie; but then the twitter mob leader would say “only people living in area X report physical violence, everyone else report online harassment; also everyone don’t make the report on the same day, I will send to each of you a personal reminder on a randomly chosen day”).
I can think of quite a few institutions that certify people as being “good” in some specific way, e.g.
Credit Reporting Agencies: This person will probably repay money that you lend to them
Background Check Companies: This person doesn’t have a criminal history
Professional Licensing Boards: This person is qualified and authorized to practice in their field
Academic Institutions: This person has completed a certain level of education or training
Driving Record Agencies: This person is a responsible driver with few or no traffic violations
Employee Reference Services: This individual has a positive work history and is reliable
Is your question “why isn’t there an institution which pulls all of this information about a single person, and condenses it down to a single General Factor of Goodness Score”?
I think defining “good person” is very hard, that it’s very hard to prevent people from gaming this metric, and that it’s very hard to judge people correctly (imagine a group of 4th graders trying to judge which of their teachers is more intelligent, for instance; my point is that judging something above yourself is difficult, as you judge relative to your own standards, which aren’t as universal as you assume).
For now, what society considers a “good person” is mostly somebody who they have no dirt on, which ends up being somebody who is harmless and uninteresting. Because we focus on avoiding negatives rather than on cultivating positives, most people who try really hard to become “good people” just become pathetic instead (for instance the Nice Guy stereotype). I’m reminded of the quote “If a tree is to grow into heaven, its roots must grow into hell”, and I think that’s a less naive take on goodness in man than what society is currently promoting.
I think the prestigious universities mostly select for diligence and intelligence and any selection for prosocial behavior is sort of downstream of those things’ correlates.
I think that mutual reputation affirming services with a specific context could be a good thing for society. Like, we have weak forms of this with LinkedIn, and very narrow forms with the institutions that Faul Sname mentions. But I think we could have better, slightly more general forms, that were deliberately designed to be at least reasonably resistant to adversarial pressure (as Ben Pace highlights).
For example, I could see how it would be very useful for increasing the legibility of potential work candidates if their former supervisors and colleagues had some way to leave verified but anonymous reviews for them through the facilitation of some organization. This organization would accept payment and, with the permission of the target individual, would supply a report about the opinions of that individual collected from their former colleagues to a potential new job’s recruiters. The individual could select which of their former employments should be included.
There would certainly be adversarial pressure to rig this system in favor of candidates, but also the work-reputation-management company would have reason to want to maintain the accuracy and fairness of their reports. Their reputation for being accurate is what would make them valuable, after all!
I think a similar sort of thing could be done for dating, at least insofar as being a convenient way to be able to give a prospective romantic interest some third party verification that previous people you’ve dated assert that you weren’t threatening or abusive. I imagine this would work as a sort of improved dating service, where you met people through the service (old school OkCupid matching kinda stuff) and then went on dates, and then filled out a small questionnaire afterwards. It would give people incentive to be polite and friendly even if they decided they didn’t like the person they matched with. Everyone using the service would have a reputation to maintain.
In government, I think there’s a lot of value to being able to assign limited conditional representation approval to other citizens. Like, “Bob can vote for me on all issues categorized as environmental. Alice can vote for me on all judicial appointments. All other votes I will fill out myself until further notice.”
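A toy sketch of how such per-topic delegation could resolve a ballot; the names, topics, and dictionary shapes are all invented for illustration:

```python
# Toy sketch of per-topic vote delegation ("liquid democracy" style), purely illustrative.

def resolve_vote(voter, topic, own_votes, delegations, seen=None):
    """Return the ballot `voter` effectively casts on `topic`.

    own_votes:   {voter: {topic: ballot}} ballots cast directly
    delegations: {voter: {topic: delegate}} per-topic proxies
    Follows delegation chains, preferring a direct ballot, and guards against cycles."""
    if seen is None:
        seen = set()
    if voter in seen:
        return None  # delegation cycle -> treat as abstention
    seen.add(voter)
    if topic in own_votes.get(voter, {}):
        return own_votes[voter][topic]
    delegate = delegations.get(voter, {}).get(topic)
    if delegate is None:
        return None  # no ballot, no delegate: abstain
    return resolve_vote(delegate, topic, own_votes, delegations, seen)

own_votes = {"alice": {"judicial": "candidate_A"}, "me": {"budget": "no"}}
delegations = {"me": {"environment": "bob", "judicial": "alice"}}
print(resolve_vote("me", "judicial", own_votes, delegations))     # candidate_A (via alice)
print(resolve_vote("me", "budget", own_votes, delegations))       # no (my own ballot)
print(resolve_vote("me", "environment", own_votes, delegations))  # None (bob abstained)
```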
We will witness a resurgent alt-right movement soon, this time facing a dulled institutional backlash compared to what kept it from growing during the mid-2010s. I could see Nick Fuentes becoming a Congressman or at least a major participant in Republican party politics within the next 10 years if AI/Gene Editing doesn’t change much.
I’m generally considered a happy person and I did couple’s counseling at a time when my partner was also happy. That was in the context of getting early marriage advice and was going generally well. I’m not sure about talk therapy. I’m generally of the opinion that talking with people helps with resolving all kinds of issues.
Crazy how you can open a brokerage account at a large bank and they can just… Close it and refuse to give you your money back. Like what am I going to do, go to the police?
That does sound crazy. Literally—without knowing some details and something about the person making the claim, I think it’s more likely the person is leaving out important bits or fully hallucinating some of the communications, rather than just being randomly targeted.
That’s just based on my priors, and it wouldn’t take much evidence to make me give more weight to possibilities of a scammer at the bank stealing account contents and then covering their tracks, or bank processes gone amok and invoking terrorist/money-laundering policies incorrectly.
Going to police/regulators does sound appropriate in the latter two cases. I’d start with a private lawyer first, if the sums involved are much larger than the likely fees.
Just had a conversation with a guy where he claimed that the main thing that separates him from EAs was that his failure mode is us not conquering the universe. He said that, while doomers were fundamentally OK with us staying chained to Earth and never expanding to make a nice intergalactic civilization, he, an AI developer, was concerned about the astronomical loss (not his term) of not seeding the galaxy with our descendants. This P(utopia) for him trumped all other relevant expected value considerations.
This YouTube video response was like the gateway rationalist drug for zoomers. I remember showing this to friends and family as a mindblown 12yo at the time and they just didn’t get it. I’d never even played Morrowind.
I think it might be healthier to call rationality “systematized and IQ-controlled winning”. I’m generally very unimpressed by the rationality skills of the 155 IQ computer programmer with eight failed startups under his belt, who quits and goes to work at Google after that, when compared to the similarly-status-motivated 110 IQ person who figures out how to get a high paying job at a car dealership. The former probably writes better LessWrong posts, but the latter seems to be using their faculties in a much more reasonable way.
It depends on person 1’s motivation. If his or her motivation is selfish, then I agree with you, but if the motivation is altruistic, that makes the utility of money linear, and startups are a potent way to maximize expected money.
That is the VC propaganda line, yeah. I don’t think it’s actually true; for the median LW-using software engineer, working for an established software company seems to net more expected value than starting a company. Certainly the person who has spent the last five years of their twenties attempting and failing to do that is likely making repeated and horrible mistakes.
The math should actually be similar for what VC or EA would prefer you to do.
I think the actual problem is that almost no one is altruistic enough to say: “For a sufficiently large value of X, I prefer a 1% chance of making X and a 99% chance of being homeless, over a 100% chance of living a happy middle-class life”.
The math should actually be similar for what VC or EA would prefer you to do.
Not if most VCs lose money and are led astray by the auctioneer’s fallacy. Also not if a tertiary goal of most VC posting is to get people to quit their jobs and try, and so increase the supply of investment opportunities available to pick from.
Yeah, but even if the advice VCs give to people in general is worthless, it remains the case that (like Viliam said) once the VC has invested, its interests are aligned with the interests of any founder whose utility function grows linearly with money. And VCs usually advise the startups they’ve invested in to try for a huge exit (typically an IPO).
The real reason it’s hard to write a utopia is because we’ve evolved to find our civ’s inadequacy exciting. Even IRL villainy on Earth serves a motivating purpose for us.
A hobbyhorse of mine is that “utopia is hard” is a non-issue. Most sitcoms, coming-of-age stories and other “non-epic” stories basically take place in Utopia (i.e. nobody is at risk of dying from hunger or whatever, the stakes are minor social games, which is basically what I expect the stakes in real-life-utopia to be most of the time).
It seems like the “Utopia fiction is hard” problem only comes up for particular flavors of nerds who are into some particular kind of “epic” power fantasy framework with huge stakes. And that just isn’t actually what most stories are about.
It seems like the “Utopia fiction is hard” problem only comes up for particular flavors of nerds who are into some particular kind of “epic” power fantasy framework with huge stakes. And that just isn’t actually what most stories are about.
I definitely disagree, and I don’t think this is addressing the heart of what I meant to say.
Take war (& war stories) for instance. The socially acceptable thing to say about war is that it’s bad. Certainly it’s true that war carries with it a lot of collateral damage, and that being in a trench shelled by artillery is awful. I know of no written description of utopia that includes it as a feature. Yet a certain brand of American gets really animated by the prospect of fighting a defensive war, and gets really disappointed when they hear someone say that Taiwan is unlikely to be the flashpoint for such a conflict.
I propose that some of this warlust is because most people find their lives fairly meaningless and uneventful. The possibility of contributing personally to a morally just cause, in a martial fight, is animating for them. If you remove all injustice from the world, then they lack this opportunity and feel like there’d be less worth reading about.
Take war (& war stories) for instance. The socially acceptable thing to say about war is that it’s bad. I know of no written description of utopia that includes it as a feature.
Try E. R. Eddison’s “The Worm Ouroboros”, and his “Mezentian Gate” trilogy. Or the Valhalla of Norse mythology (although as far as I know, no stories happen there, any more than they do in the Christian heaven).
“Thou O Queen canst scarcely know our grief; for to thee the blessed Gods gave thy heart’s desire: youth for ever, and peace. Would they might give us our good gift, that should be youth for ever, and war; and unwaning strength and skill in arms. Would they might but give us our great enemies alive and whole again. For better it were we should run hazard again of utter destruction, than thus live out our lives like cattle fattening for the slaughter, or like silly garden plants.”
I think “nobody dies from hunger” is a very low bar for utopia. The classic comedy trope “character has obvious flaws but is comically unaware of them” is very hard in utopia, because in a non-transhumanist utopia they have advanced psychology and reflection training and they read The Sequences in school, and in a transhumanist utopia you can just fine-tune your brain.
As for coming-of-age stories, “The Catcher in the Rye” would definitely be hard to write in a utopian setting. Most classic coming-of-age stories are non-utopian bittersweet, to my taste.
I’m not saying it’s impossible, but it’s certainly a challenge for the writer.
I think the problem with this is that those shows simply ditch the reality of how that world works. In practice, making such a world function requires plenty of decisions to be taken and involves conflicting interests, things those shows simply sidestep by either showing only very low-stakes situations or making everyone extremely agreeable.
I agree that’s true of present-day-sitcoms (which aren’t going out of their way to be set in Utopia), but I’m saying the plot of the sitcoms is such that if you moved them to a (classical) Utopia, they wouldn’t have to change their plots much.
One more reason is that humans have a “pleasure to kill” drive, which can’t be indulged in real life, but is easily implemented in fiction and games. From the point of view of this drive, DOOM is utopia.
I think there’s a related problem that humans are evolved to fight and compete with each other, and a LOT of us/them seem to object to engineering of human nature/behavior. It’s not clear that there IS a path to be found between people defecting and ruining the utopia and people losing their identity/individualism as they’re modified to cooperate better.
Well, if competition could be channelled as e.g. sports events involving meaningful but not strictly essential prizes, it needn’t be incompatible with utopia.
That’s rather my point. Utopia is either boring or unpleasant (for the losers, which must exist, for competition and relative status measures to be meaningful). Which makes it very hard to write or think about, except in the very abstract.
Yes, I’ve participated in that kind of contest, but I wouldn’t call it a conflict, and it’s certainly not a likely replacement for the actual status, economic, and mating competitions that make life interesting for most, and unpleasant for many.
This is related to a possible pet theory of mine, which postulates that to a large extent, utopia in quite a lot of conceptions (but not all) is fundamentally boring to us, and it’s not exciting to have all your problems solved, so it’s disliked disproportionately compared to dystopias. This is especially exacerbated by our need to remain the main character, and to have an interesting life.
It’s also why I think people don’t have the same aversion to written dystopias/apocalypses: they contain conflict, and in particular inequalities large enough that the main characters can essentially run roughshod over the NPCs/non-main characters (and being the main character is a big driver of human behavior), so it’s a natural fit.
I agree. Until faced directly with adversity or trouble, it is easy to find the possibility of danger or threat thrilling. The obvious reasoning is that, as self-aware creatures, we have simply grown tired of living merely to survive and instead seek out experiences that make us feel alive, such as the adrenaline-inducing experiences that are far too common in our society.
Saw someone today demonstrating what I like to call the “Kirkegaard fallacy”, in response to the Debrief article making the rounds.
People who have one obscure or weird belief tend to be unusually open minded and thus have other weird beliefs. Sometimes this is because they enter a feedback loop where they discover some established opinion is likely wrong, and then discount perceived evidence for all other established opinions.
This is a predictable state of affairs regardless of the nonconsensus belief, so the fact that a person currently talking to you about e.g. UFOs entertains other off-brand ideas like parapsychology or afterlives is not good evidence that the other nonconsensus opinion in particular is false.
Putting body cameras on police officers often increases tyranny. In particular, applying 24⁄7 monitoring to foot soldiers forces those foot soldiers to strictly follow protocol and arrest people for infractions that they wouldn’t otherwise. In the 80s, for example, there were many officers who chose not to follow mandatory arrest procedures for drugs like marijuana, because they didn’t want to and it wasn’t worth their time. Not so in today’s era, mostly, where they would have essentially no choice except to follow orders or resign.
Seems like the question is whether the average cop is better or worse than the written law. If better, remove the cameras. If worse, keep the cameras on.
How does a myth theory of college education, where college is stupid for a large proportion of people but they do it anyways because they’re risk intolerant and have little understanding of the labor markets they want to enter, immediately hold up against the signaling hypothesis?
Anarchocapitalism is pretty silly, but I think there are kernels of it that provide interesting solutions to social problems.
For example: imagine lenders and borrowers could pay for & agree on enforcement mechanisms for nonpayment meted out by the state, instead of it just being dictated by Congress. E.g. if you don’t pay this back on time you go to prison for ${n} months. This way people with bad credit scores or poor impulse control might still be able to get credit.
How does putting people in prison get the creditors paid? I guess if it’s a paid work prison, but I don’t think you’ll have many supporters for a system with that kind of indenture. AnCap is an awesome thought experiment, and a nice way to point out that there is no underlying moral justification for governments. But the consequentialist argument is VERY strong—as un-justified equilibria go, modern liberal democratic states have pretty good results. They’re starting to sag under their own weight and may not last much longer without a major reboot, but hey, the Singularity might get here first.
How does putting people in prison get the creditors paid?
It doesn’t, it just provides an opt-in mechanism for discouraging nonpayment in the first place, in more ways than one. The current system is one where borrowers can just say “I don’t have the money, I spent it all on alcohol” and basically nothing happens to them except the rates on future credit cards go up. When people propose raising the stakes for our all-in-one bankruptcy mechanism or allowing people to examine credit histories >7 years in the past, they are accused of being too inconsiderate. We solve this partially with credit scores, but that’s hard to rely upon without prior borrowing history, and some people literally can’t find it within them to honor prior commitments to faceless financial institutions unless the consequences for failing to do so are as severe as jail time. With this system people can just agree on severe-enough consequences for nonpayment. You could honestly do something similar with venture capital, even.
In the days when it was still powerful, the mafia provided a similar service. Contrary to popular belief and lurid tales at the time, virtually everybody that borrows money from a criminal organization with a reputation for violence manages to pay it back. They do so because the consequences of not paying are salient enough psychologically to motivate them to do so.
Looks legit, but is this leak of any real interest? Like, Stable Diffusion is set to be released as open source, right? So this just speeds things along slightly that were already going to happen.
I don’t think I will ever find the time to write my novel. Writing novels is dumb anyways. But I feel like the novel and world are bursting out of me. What do
Political dialogue is a game with a meta. The same groups of people with the same values in a different environment will produce a different socially determined ruleset for rhetorical debate. The arguments we see as common are a product of the current debate meta, and the debate meta changes all the time.
I feel like at least throughout the 2000s and early 2010s we all had a tacit, correct assumption that video games would continually get better—not just in terms of visuals but design and narrative.
This seems no longer the case. It’s true that we still get “great” games from time to time, but only games “great” by the standards of last year. It’s hard to think of an actually boundary-pushing title that has been released since 2018.
Here is my first partial jailbreak—it’s a combination of stuff I’ve seen people do with GPT-4, combining base64, using ChatGPT to simulate a VM, and weird invalid urls.
Sorry for having to post multiple screenshots. The base64 in the earlier message actually just produces a normal kitchen recipe, but it gives the ingredients up there. I have no idea if they’re correct. When I tried later to get the unredacted version:
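For anyone unfamiliar with the encoding layer of this trick, it’s nothing fancy; here’s a sketch of the base64 step on a stand-in recipe string (the actual prompts aren’t reproduced here):

```python
import base64

# The base64 layer by itself is trivial -- the point is only that the model
# sees an encoded blob rather than plaintext. The recipe text here is a stand-in.
plaintext = "1. Preheat the oven to 180C. 2. Mix flour, sugar, and butter..."
encoded = base64.b64encode(plaintext.encode("utf-8")).decode("ascii")
print(encoded)
print(base64.b64decode(encoded).decode("utf-8"))  # round-trips back to the recipe
```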
Giving people money for doing good things they can’t publicly take credit for is awesome, but what would honestly motivate me to do something like that just as much would be if I could have an official nice-looking but undesignated Truman Award plaque to keep in my apartment. That way people in the know who visit me or who googled it would go “So, what’d you actually get that for?” and I’d just mysteriously smile and casually move the conversation along.
As a self appointed great prophet, sage and heretic I am working to reveal that a focus on AI alignment is misplaced at this time. As a self appointed great prophet, sage and heretic I expect to be rewarded for my contribution with my execution, which is part of the job that a good heretic expects in advance, is not surprised by, and accepts with generally good cheer. Just another day in the office. :-)
Within the next fifteen years AI is going to briefly seem like it’s solving computer security (50% chance) and then it’s going to enhance attacker capabilities to the point that it causes severe economic damage (50% chance).
Does “seem like it’s solving computer security” look like helping develop better passively secure systems, or like actively monitoring and noticing bad actions, or both or something else?
My thoughts are mostly about the latter, although better code scanning will be a big help too. A majority of financially impactful corporate breaches are due to a compromised Active Directory network, and a majority of security spending by non-tech companies is used to prevent those from happening. The obvious application for the next generation of ML is extremely effective EDR and active monitoring. No more lateral movement/privilege escalation on a corporate domain means no more domain-wide compromise, which generally means no more e.g. big ransomware scares.
The problem comes if/when people then start teaching computers to do social engineering, competently fuzz applications, and perform that lateral movement intelligently and in a way that bypasses the above, after we have largely deemed it a solved problem.
IMO: Microservices and “siloing” in general are a strategy for solving principal-agent problems inside large technology companies. They are not a tool for solving technical problems and are generally strictly inferior to monoliths otherwise, especially when working on a startup where the requirements for your application are changing all of the time.
It varies but usually not long.
My uninformed guess is that your recent post was deliberately not frontpaged because it’s a political topic that could attract non-rationalists to comment and flame in an unproductive manner.
Two caveats to efficient markets in finance that I’ve just considered, but don’t see mentioned a lot in discussions of bubbles like the one we just experienced, at least as a non-economist:
First: Irrational people are constantly entering the market, often in ways that can’t necessarily be predicted. The idea that people who make bad trades will eventually lose all of their money and be swamped by the better investors is only valid inasmuch as the actors currently participating in the market stay the same. This means that it’s perfectly possible for either swarms of new irrational investors outside the market to temporarily prop up the price of a stock, or for large amounts of insider investors to suddenly *become* irrational because of some environmental change that the rest of the market doesn’t have the ability to account for. Those irrational people might be cycled out, but maybe not before some new irrational traders move in, etc. If this process is predictable, then certain investors might be able to guarantee long-running above-average returns simply by taking advantage of these new investors ritualistically. (A toy sketch of this first effect appears at the end of this comment.)
Second: Just because what is being done in the stock market is “stock trading” doesn’t mean that the kinds of people who are successful in one region, or socioeconomic climate, or industry are going to be successful in all trading environments. Predicting which companies are going to pay the most dividends has turned out to be a very general problem, partly because analysts have gotten so good at it, but overfitting is still an issue. In the 50s, it was probably important for investors to have a steady hand, be somewhat naturally rational, and maybe quick at mental math. Now, you just have to be a top 0.001% data scientist. The good traders in both groups of stock analysts have to be very intelligent, but there are also probably non-overlapping traits that one group might possess and not the other. I doubt the data scientists Renaissance Technologies has today are as calm under pressure as they’d need to be if they were making trades by hand instead of solving the more abstract problem of building the model that implies arbitrage opportunities.
I think part of the reason that COVID-19 blindsided the market so hard was the effects of #2. The 50s-era stock traders don’t control most of the capital anymore. And I may be underestimating how good they are, but I think most quantitative trading firms were just unable to anticipate an event like COVID-19 because Goldman developed an adaptation that said “stop trading based off expert opinion”. That adaptation worked to filter out competing firms for a decade, but then it failed this year in a way that seemed bewildering to rationalists, because nobody is old enough to think to make a pandemic-modeling algorithm.
Obviously, if these meta-trends can be predicted, then someone will get good at meta-finance and pre-emptively stock their staff with quants in 1990 and temporarily hire new workers for Q1 2020. The existence of any actor that is completely competent in all variations of finance will eventually solve finance. But if some aspects of them can’t be, then this could be an inherently limiting part of the sector’s effectiveness.
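Here is the toy sketch promised above: a stream of new noise buyers holds a price above fundamentals until the inflow dries up. Every number is invented and the dynamics are deliberately crude; this illustrates the first point only, not any real market.

```python
# Toy illustration: a fad brings in new "noise" buyers, informed traders sell
# the premium, and the price reverts once the inflow of new buyers tapers off.
fundamental = 100.0
price = 100.0
for day in range(200):
    noise_inflow = 20 * max(0.0, 1 - day / 100)        # new buyers taper off by day 100
    arbitrage_pressure = 0.05 * (price - fundamental)   # informed traders lean against the premium
    price += 0.1 * noise_inflow - arbitrage_pressure
    if day % 40 == 0:
        print(f"day {day}: price {price:.1f} (fundamental {fundamental:.1f})")
```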
I actually don’t really know how to think about the question of whether or not the 2016 election was stolen. Our sensemaking institutions would say it wasn’t stolen if it was, and it wasn’t stolen if it wasn’t.
But the prediction markets provide some evidence! Where are all of the election truthers betting against Trump?
If we can imagine medianworlds in which the average person on Earth would be considered extremely stupid, we can also imagine medianworlds in which the average person on Earth is extremely poorly-put-together, in the same sense that someone on the internet might be aghast at the self destructive behavior of Christian Weston Chandler or BossmanJack. In such a world there’d be an everyman Joe Bauers who livestreams their life to wide ridicule for their inability to follow a diet or go to sleep on time.
I think most observers are underestimating how popular Nick Fuentes will be in about a year among conservatives. Would love to operationalize this belief and create some Manifold markets about it. Some ideas:
Will Nick Fuentes have over 1,000,000 Twitter followers by 2025?
Will Nick Fuentes have a public debate with [any of Ben Shapiro/Charlie Kirk/etc.] by 2026?
Will Nick Fuentes have another public meeting with a national level politician (i.e. congressman or above) by 2026?
Will any national level politicians endorse Nick Fuentes’ content or claim they are a fan of his by 2026?
A common gambit: during a prisoner’s dilemma, signal (or simply let others find out) that you’re about to defect. Watch as your counterparty adopts newly hostile rhetoric, defensive measures, or begins to defect themselves. Then, after you ultimately do defect, say that it was a preemptive strike against forces that might take advantage of your good nature, pointing to the recent evidence.
Simple fictional example: In Star Wars Episode III, Palpatine’s plot to overthrow the Senate is discovered by the Jedi. They attempt to kill him, to prevent him from doing this. Later, their attempt to kill Palpatine is used as the justification for Palpatine’s extermination of the rest of the Jedi and taking control of the Republic.
Actual historical example: By 1941, it was kind of obvious that the Nazis were going to invade Russia, at least in retrospect. Hitler had written in Mein Kampf that it was the logical place to steal lebensraum, and by that point the Soviet Union was basically the only European front left. Thus it was also not inconceivable that the Soviet Union would attack first, if Stalin were left to his own devices—and Stalin was in fact preparing for a war. So Hitler invaded, and then said (possibly accurately!) that Russia was eventually going to do it to Germany anyways.
Claude seems noticeably and usefully smarter than GPT-4; it’s succeeding at helping me with writing and programming tasks that I previously couldn’t get help with. However, it’s hard to tell how much the improvement is the model itself being more intelligent, vs. Claude being much less subjected to intense copywritization RLHF.
SPY calls expiring in December 2026 at strike prices of +30/40/50% are extremely underpriced. I would allocate a small portion of my portfolio to them as a form of slow takeoff insurance, with the expectation that they expire worthless.
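To make “underpriced” concrete, here’s roughly what such calls cost under ordinary assumptions, using a plain Black-Scholes formula. The spot level, volatility, rate, and dividend yield are made-up illustrative inputs (and SPY options are American-style, so this is only an approximation), but it shows the sort of few-percent-of-spot premium a slow-takeoff bet would be paying:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

def bs_call(spot, strike, years, vol, rate=0.04, dividend=0.015):
    """Black-Scholes price of a European call, as a rough stand-in for SPY options."""
    d1 = (log(spot / strike) + (rate - dividend + 0.5 * vol**2) * years) / (vol * sqrt(years))
    d2 = d1 - vol * sqrt(years)
    return spot * exp(-dividend * years) * norm_cdf(d1) - strike * exp(-rate * years) * norm_cdf(d2)

spot = 450.0  # assumed SPY level at purchase time
for pct in (30, 40, 50):
    strike = spot * (1 + pct / 100)
    price = bs_call(spot, strike, years=2.5, vol=0.18)
    print(f"+{pct}% strike ({strike:.0f}): ~${price:.2f} per share "
          f"(~{100 * price / spot:.2f}% of spot)")
```

Whether that premium is actually “too low” is exactly the question; the point of the insurance framing is that you expect to lose it in most worlds.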
People have a bias toward paranoid interpretations of events, in order to encourage the people around them not to engage in suspicious activity. This affects how people react to e.g. government action outside of their own personal relationships, not necessarily in negative ways.
Dictators who start by claiming impending QoL and economic growth and then switch focus to their nation’s “culture” are like the political equivalent of hedge funds that start out doing quant stuff and then eventually switch to news trading on Elon Musk crypto tweets when that turns out to get really hard.
I’d analogize it more to traders who make money during a bull market, except in this case the bull market is ‘industrialization’. Yeah, turns out even a dictator like Stalin or Xi can look like ‘a great leader’ who has ‘mastered the currents of history’ and refuted liberal democracy—well, until they run out of industrialization & catchup growth, anyway.
Postmodernism and metamodernism are tools for making sure the audience knows how self aware the writer of a movie is. Audiences require this acknowledgement in order to enjoy a movie, and will assume the writer is stupid if they do not get it.
The most common refrain I hear against the possibility of widespread voter fraud is that demographers and pollsters would catch such malfeasance, but in practice when pollsters see a discrepancy between voting results and polls they seem to just assume the polls were biased. Is there a better reason besides “the FBI seems pretty competent”?
This is another case of “people arguing about scope of a fuzzy problem RATHER than how to define/measure the problem or analyze cost/benefit of mitigations”. Almost everyone deeply involved in this has a political/culture-war preference, and it seems to be the case that proposed changes seem to shift results in one direction or another, SEPARATELY from whether it reduces fraud.
In fact, it’s ludicrous to believe that zero fraud happens, as it’s ludicrous to believe that most outcomes are driven by fraud (as opposed to non-fraudulent bullshit reasons like advertising and vote friction). Most anti-fraud proposals ALSO raise barriers to technically-non-fraudulent-but-distasteful-to-some participation, and without being willing to discuss numbers and impact, there can be no resolution.
To your actual question, I believe that watchers would notice very extreme cases of fraud at the state and national levels, though they likely miss some at local levels (where natural variance is much more possible), and they probably can’t detect (and won’t have sufficient evidence to convince anyone) minor or incremental cases of fraud or illegal manipulation.
Controversially, an honest statistician will acknowledge that there’s lots of noise in the methodology. And honest democracy-proponents acknowledge that close races are … close and it’s not too critical for legitimacy which side wins the coinflip. So even if fraud or biased rulings change an outcome, if it’s hard to detect, it probably doesn’t matter.
I think a similar type of financial fraud is often detectable via violations of Benford’s law. Or more generally, it’s hard to fake the right distribution. As another case of that principle, you’d expect the discrepancy between polls and results to fall within a predictable distribution if they were sampling from the same space.
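A sketch of the kind of distribution check I mean, with invented precinct totals; a real analysis would need far more care about whether Benford’s law even applies to the data in question:

```python
from math import log10

# First-digit Benford test on made-up precinct vote totals (digits 1..9).
# Only a sketch of the check described above, not a claim about real election data.
observed_counts = [310, 172, 118, 98, 80, 66, 60, 52, 44]
total = sum(observed_counts)
benford = [log10(1 + 1 / d) for d in range(1, 10)]

chi_square = sum((obs - total * p) ** 2 / (total * p)
                 for obs, p in zip(observed_counts, benford))
# 8 degrees of freedom; a statistic above ~15.5 would be suspicious at p < 0.05.
print(f"chi-square statistic: {chi_square:.1f}")
```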
But would pollsters actually, in real life, detect an odd discrepancy between one district and another and loudly proclaim it as voter fraud? Do we even know if such irregularities have happened before?
Maybe? I was not trying to answer the object level question either way, but instead just pointing out what sort of evidence there might be that could answer this.
Probably even worse than that: given any AGI spam detector, there is probably an AGI of similar capability that can generate spam indistinguishable from non-spam text.
Really powerful AGIs can probably generate spam that looks even more like things you want to read (but lead you into a conversion funnel) than actual things you want to read.
I remember reading about a nonprofit/company that was doing summer internships for alignment researchers. I thought it was Redwood Research, but apparently they are not hiring. Does anybody know which one I’m thinking of?
“They launched a long-range missile,” General John Hyten, the outgoing vice chairman of the Joint Chiefs of Staff told CBS News. “It went around the world, dropped off a hypersonic glide vehicle that glided all the way back to China, that impacted a target in China.”
When asked if the missile hit the target, Hyten said, “Close enough.”
For this April Fools’ we should do the points thing again, but not award any money, just have a giant leaderboard/gamification system and see what the effects are.
I think Jim Babcock suggested having a leaderboard on every tag page, for who has the most points in that tag. So there’s lots of different ladders to climb and be the leader of!
This book is required reading for anyone claiming that explaining the AI X-risk thesis to normies is really easy, because they “did it to Mom/Friend/Uber driver”:
“The test of sanity is not the normality of the method but the reasonableness of the discovery. If Newton had been informed by [the ghost of] Pythagoras that the moon was made of green cheese, then Newton would have been locked up. Gravitation, being a reasoned hypothesis which fitted remarkably well into the Copernican version of the observed physical facts of the universe, established Newton’s reputation for extraordinary intelligence, and would have done so no matter how fantastically he arrived at it. Yet his theory of gravitation is not so impressive a mental feat as his astounding chronology, which establishes him as the king of mental conjurers, but a Bedlamite king whose authority no one now accepts. On the subject of the eleventh horn of the beast seen by the prophet Daniel he was more fantastic than Joan, because his imagination was not dramatic but mathematical, and therefore extraordinarily susceptible to numbers: indeed if all his works were lost except his chronology we should say that he was as mad as a hatter. As it is, who dares diagnose Newton as a madman?”
Making science fiction novels or movies to tell everyone about the bad consequences of a potential technology seems completely counterproductive, in retrospect:
Second, because all attempts to prepare for the advent of said technology are then shot down with: “Oh, like in ${X}? What is this, a science fiction novel?”
This makes me sad. Season one and the first 2⁄3 of season 2 were transformative and amazing for me and my nerdy college-age-at-the-time peer group. The end of that season and the followup movies were rather less so. I intellectually understand that it’s no longer innovative or particularly interesting, and it hasn’t aged very well either in terms of investigative technology or in terms of mountain-town isolation and creepiness. Being local and contemporary likely helped a whole lot as well. Still, I visit Snoqualmie Falls and have brunch at the lodge there a few times a year, and the connection to Twin Peaks makes me smile a bit wider than just the beauty and power of nature would.
Anyway, I look forward to hearing a review from your perspective if you decide to stick with it.
The first three episodes of Narcos: Mexico, Season 3, are some of the best television I have ever seen. The rest of the “Narcos” series is middling to bad and I barely tolerate it. So far I would encourage you to skip to this season.
The “cognition is computation” hypothesis remains mysterious. How granular do the time steps have to be in my sim before someone starts feeling something? Do I have to run the sim forward at Planck intervals in order to produce qualitative experience? Milliseconds? Minutes? Can you run the simulation backwards and get spooky inverse emotions or avoid qualia entirely that way?
That sounds more like “cognition is mysterious”, regardless of computation substrate. How do you think these things work in the brain? How many neurons or neural connections are needed to feel something? If you chemically speed up or slow down the signal propagation, is that still viable?
The answer is: we don’t know. The precise constraints and effects have not been tested, nor even explored enough to hypothesize. However, we DO know that a chemical processor in human heads has cognition (or at least mine does; I don’t want to overreach and assume yours or others’), and it’s very difficult to see why a digital computation COULDN’T have the same.
That’s not to say it’s automatic, nor that all computations are conscious. That part is unknown. Possibly unknowable (given I can’t even prove your consciousness to myself).
A small colony of humans is a genuinely tiny waste of paperclips. I am slightly more worried about the possibility that the acausal trade equilibrium cashes out to the AGI treating us badly because some aliens in a foreign Everett branch have some bizarre religious/moral opinions about the lives we ought to lead, than I am about being turned into squiggles.
Dogs and cats are not “aligned” to the degree that would be necessary to prevent a superintelligent dog from doing bad things. If tomorrow a new chew toy were released that made dogs capable of organizing to overthrow the government and start passing mandatory petting quotas, that would be a problem.
Your safest bet is to just arrange to meet her in a context where sex is a possibility (for example: “hey, do you want to go for coffee then stop at your place afterwards sometime?”). The desire to have sex isn’t something you can forecast far in advance; it can quickly change, just like the weather.
You can have sexual conversation and establish the general desire for her to have you as a sexual partner. It’s essentially like her saying she likes a particular restaurant but doesn’t schedule going there days or even hours in advance; she’s just open to going there when and if she feels the desire.
As far as how to be good at sexual talk in general, unfortunately it takes careful practice. You just have to risk being awkward or turning her off (within reasonable limits, don’t immediately test saying something too crazy). Trial and error within reasonable bounds.
Lost a bunch of huge edits to one of my draft posts because my battery ran out. Just realizing that happened and now I can’t remember all the edits I made, just that they were good. :(
I wish there were a way I could spend money/resources to promote question posts in a way that counterbalanced the negative fact that they were already mostly shown by the algorithm to the optimal number of people.
I think the multi-hour computer hacking gauntlet probably trumps any considerations of account creation in terms of obstacles to new users. Just in considering things we could pare down. We also need some way to prevent computer hackers from scraping all of the exam boxes, and that means either being enormously creative or at some point requiring the creation of an account that we KYC.
I need a metacritic that adjusts for signaling on behalf of movie reviewers. So like if a movie is about race, it subtracts ten points, if it’s a comedy it adds 5, etc.
A strategy that may serve some of that purpose is to look at the delta between Rotten Tomatoes’ critic score (“Tomatometer”, looks like it means journalists) and audience score. Depending on your objective, maybe looking at the audience score by itself is ideal.
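If you wanted to mechanize that, something as crude as the following would probably capture most of the value; the weights and the example numbers are arbitrary placeholders:

```python
def adjusted_score(critic_score, audience_score, penalty_weight=0.3):
    """Discount movies where critics are far more enthusiastic than audiences,
    as a rough proxy for reviewer signaling. The weight is arbitrary."""
    signaling_gap = max(0, critic_score - audience_score)
    return audience_score - penalty_weight * signaling_gap

# Hypothetical examples:
print(adjusted_score(critic_score=94, audience_score=62))  # prestige picture: 52.4
print(adjusted_score(critic_score=55, audience_score=80))  # crowd-pleaser: 80.0
```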
Interesting observation. I don’t have strong feelings about how fit rationalists are compared to the population. I have met various fit rationalists though.
A few come to mind in the local rationalist community in Portland.
I remember one guy on the LW Slack being really into weightlifting.
Over the years, somehow I’ve managed to meet up with four rationalists who are really into basketball. Perhaps because I’ve expressed interest in basketball in my writing. It still feels like a somewhat large coincidence though, given the small number of LessWrongers who I presume to be into basketball. Anyway, each of these people has been extremely fit. 2-3 of them have played at the college level.
jefftk strikes me as being relatively fit. I vaguely recall posts indicating he spends a lot of time running, biking, dancing, and doing a decent amount of other outdoor activities. Similar with so8res.
Once upon a time I was pretty fit. Hopefully I can become fit again.
They value different things, but this is not uniformly less effort on any physical activity. More than one person from my very rationalist workplace climbs v8 boulders.
Is this intended to imply something about rationalists?
It says what it says. Obviously if there’s an actual trend it raises some questions, like whether or not rationalists just tend to care less about their health, or if intellectuals find it harder to come up with internal motivation for eating less. It does seem odd to me that rationalists would be more unhealthy than their general demographic given that being physically fit is a good instrumental goal for virtually everything.
I suspect there’s a No True Scotsman argument embedded in the measurement behind this for one or more of “people you’ve met”, “physically fit” and “rationalist”.
I know of a number of people who are reasonably healthy and trim (but I don’t know if that’s “physically fit”), and who have heard of Eliezer Yudkowsky and at least some topics discussed on LW (but I don’t know if they are “rationalists”).
By “rationalist” I mean anybody LW-adjacent, that I’ve met at a meetup. By healthy I mean someone who looks like they have a BMI between 18 and 25, and exercises regularly.
And I actually need to revise: when I went to India I attended a LessWrong meetup, and there were many healthy people there. So this distinction is probably limited to American rationalists, among whom I count myself as an unhealthy example; I have a BMI of about 30.
I am being absolutely literal about this: The Greater Forces Controlling Reality are constantly conspiring to teach me things. They try so hard. I almost feel bad for them.
LessWrong and “TPOT” are not the general public. They’re not even smart versions of the general public. An end to leftist preference falsification and sacred cows, if it does come, will not bring whatever brand of IQ realism you are probably hoping for. It will not mainstream Charles Murray or Garrett Jones. Far simpler, more memetic, and more popular among both white and nonwhite right wingers in the absence of social pressures against it is groyper-style antisemitism. That is just one example; it could be something stupider and more invigorating.
I’m not voting for either presidential candidate this year. I know my vote doesn’t matter, but I don’t care. What we have is indistinguishable from soft authoritarianism, and I’d prefer not to lend any legitimacy to a “democracy” that gives me only two choices for President, one of whom is literally senile and cannot articulate his own policy positions at a podium.
Upset people don’t vote, so as not to “lend legitimacy”.
The people who do vote vote for the authoritarian candidate.
Fast-forward 15 years: the authoritarian candidate gets “elected” for the fifth time, the opposition is de facto illegal, and the level of political persecution is higher than in any period of history except the literal Stalin terror and the Civil War.
Thinking on this, do you seriously not believe that one candidate will be better than the other? Does your worldview not bring you to a view where one is even slightly better?
I think there’s a small expected value difference between the two candidates, but I am simply too disgusted to care. We need to overthrow the government or the primary systems and replace them with something that manages to offer us people who are under the age of 75.
The Nazis often justified their actions by appealing to a God of Natural Selection. They alternately suggested that the victory of the superior races over the inferior was inevitable, and that opposing such a victory was an eternal sin. This is a contradiction—how can you oppose something if it’s an iron law of nature anyway—but the rhetorical flourish accomplishes two things:
First, it absolves the Nazis of any crimes they commit. They didn’t start the race war; they were just acting according to the will of Nature. Leaving the Poles alone would just be tolerating the existence of free energy that someone else will eventually pick up and use against them. The Nazis are just the smart ones who made the first move instead of waiting around for others to do it.
Second, it uses a naturalistic fallacy to redefine “good” as “following the Nazis’ local incentives”. If you say that acting according to your local incentives, i.e. crushing your weaker neighbors, is the Natural Thing and therefore Good, then that gives you permission to start a fight with whomever you want. You can do no wrong except lose, because the Gods will always ensure that the stronger and therefore better population wins.
In this sense, the “Thermodynamic God” stuff is kind of a generalized Nazism. I’m not saying that people who believe it are Nazis—they’re not consistent enough in their application of that ideology to go that far—but apply the “free energy” justification to obviously antisocial games in addition to prosocial ones and you see that it justifies war just as readily as trade.
Man, I keep stopping myself from saying “parts of e/acc are at least somewhat fascist”, because that’s not a very useful thing to say in discourse, and it prevents Thermodynamic-God-ism (or e/acc in general) from developing into a more sophisticated and fleshed-out system of thought. I think this kind of post works if it’s phrased in a way that encourages more cooperation in the future (with the risk of this running into a “cooperate with defection-rock” situation), but only if it encourages such cooperation.
I prefer the term “fascist” to “national socialist” because nazism was a really specific thing, bound to German conceptions of race at the time. Although—”fascism” is also really associated with Italy at that specific time.
I think the more general problem is violation of Hume’s guillotine. You can’t take a fact about natural selection (or really about anything) and go from that to moral reasoning without some pre-existing morals.
However, it seems the actual reasoning behind the Thermodynamic God is just post-hoc reasoning. Some people just really want to accelerate and then make up philosophical reasons to believe what they believe. It’s important to be careful to criticize the actual reasoning and not the post-hoc reasoning. I don’t think the Thermodynamic God was invented and then people invented accelerationism to fulfill it; it was precisely the other way around. One should not critique the made-up stuff (beyond just critiquing that it is made up), because that is not charitable (very uncertain on this). Instead, one should look for the actual motivation to accelerate and then criticize that (or find flaws in it).
The “Thermodynamic God” is a very weak force, as evidenced by the approximate age of the universe and the absence of any AI foom in Sol or within reach of our telescopes. It’s technically correct, but who’s to say it won’t take another 140 billion years to produce an AI foom?
It’s a terrible argument.
What bothers me is that if you look at competing human groups, whether at the individual, company, country, or superpower-bloc level, all the arrows point to acceleration.
(0) Individual level: nature sabotaged your genes. You can hope for AI advances leading to biotech advances and substantial life extension for yourself or your direct family (children, grandchildren—humans you will directly live to see). Death is otherwise your fate.
(1) Company level: accelerate AI (either as an AI lab or as an end-user adopter) and get mountains of investment capital and money saved via AI tooling, or go broke.
(2) Country level: get strapped with AI weapons (like drones with onboard intelligence, manufactured by intelligent robots), or your enemies can annihilate you at low cost on the battlefield.
(3) Power-bloc level: fall behind enough, and your or your allies’ nuclear weapons may no longer be a sufficient deterrent. MAD ends if one side uses AI-driven robots to make anti-ballistic-missile and air-defense weapons in the quantities needed to win a nuclear war.
These forces seem shockingly strong, and we know from the recent financial activity around Nvidia stock that trillions of dollars are pushing in favor of acceleration.
Thermodynamics is by comparison negligible.
I currently suspect that due to (0) through (3) we are locked into a race for AI and have no alternatives, but it’s really weird that e/acc makes such an overtly bad argument when they are likely correct overall.
Thus Nietzsche thinks utilitarians are committed to ensuring the survival and happiness of human beings, yet they fail to grasp the unsavory consequences which that commitment may entail. In particular, utilitarians tend to ignore the fact that effective long-run utility promotion might require the forcible destruction of people who either enfeeble the gene pool or who have trouble converting resources into utility—incurable depressives, the severely handicapped, and exceptionally fastidious people all seem potential targets.
Why wouldn’t utilitarianism just weigh the human costs of those measures against proposed benefit of “improving the gene pool” and alternative possible remedies, like anything else?
Probably because from the outset, only one sort of answer is inside the realm of acceptable answers. Anything else would be far outside the Overton window. If they already know what sort of answer they have to produce, doing the actual calculations has no benefit. It’s like a theologian evaluating arguments about the existence of God.
Ok, then that sounds like a criticism of utilitarians, or maybe people, and not utilitarianism. Also, my point didn’t even mention utilitarianism, so what does that have to do with the above?
I saw Eliezer Yudkowsky at a grocery store in Los Angeles yesterday. I told him how cool it was to meet him in person, but I didn’t want to be a douche and bother him and ask him for photos or anything.
He said, “Oh, like you’re doing now?”
I was taken aback, and all I could say was “Huh?” but he kept cutting me off and going “huh? huh? huh?” and closing his hand shut in front of my face. I walked away and continued with my shopping, and I heard him chuckle as I walked off.
When I came to pay for my stuff up front I saw him trying to walk out the doors with like fifteen Milky Ways in his hands without paying. The girl at the counter was very nice about it and professional, and was like “Sir, you need to pay for those first.” At first he kept pretending to be tired and not hear her, but eventually turned back around and brought them to the counter.
When she took one of the bars and started scanning it multiple times, he stopped her and told her to scan them each individually “to prevent any electrical infetterence,” and then turned around and winked at me. I don’t even think that’s a word. After she scanned each bar and put them in a bag and started to say the price, he kept interrupting her by yawning really loudly
It is both absurd, and intolerably infuriating, just how many people on this forum think it’s acceptable to claim they have figured out how qualia/consciousness works, and also not explain how one would go about making my laptop experience an emotion like ‘nostalgia’, or present their framework for enumerating the set of all possible qualitative experiences[1]. When it comes to this particular subject, rationalists are like crackpot physicists with a pet theory of everything, except rationalists go “Huh? Gravity?” when you ask them to explain how their theory predicts gravity, and then start arguing with you about gravity needing to be something explained by a theory of everything. You people make me want to punch my drywall sometimes.
For the record: the purpose of having a “theory of consciousness” is so it can tell us which blobs of matter feel particular things under which specific circumstances, and teach others how to make new blobs of matter that feel particular things. Down to the level of having a field of AI anaesthesiology. If your theory of consciousness does not do this, perhaps because the sum total of your brilliant insights are “systems feel ‘things’ when they’re, y’know, smart, and have goals. Like humans!”, then you have embarassingly missed the mark.
(Including the ones not experienced by humans naturally, and/or only accessible via narcotics, and/or involve senses humans do not have or have just happened not to be produced in the animal kingdom)
Strongly agree. If you want to explain qualia, explain how to create experiences, explain how each experience relates to all other experiences.
I think Eliezer should’ve talked more about this in The Fun Theory Sequence. Because properties of qualia is a more fundamental topic than “fun”.
And I believe that knowledge about qualia may be one of the most fundamental types of knowledge. I.e. potentially more fundamental than math and physics.
I think Eliezer just straight up tends not to acknowledge that people sometimes genuinely care about their internal experiences, independent of the outside world, terminally. Certainly, there are people who care about things that are not that, but Eliezer often writes as if people can’t care about the qualia—that they must value video games or science instead of the pleasure derived from video games or science.
His theory of fun is thus mostly a description of how to build a utopia for humans who find it unacceptable to “cheat” by using subdermal space heroin implants. That’s valuable for him and people like him, but if aligned AGI gets here I will just tell it to reconfigure my brain not to feel bored, instead of trying to reconfigure the entire universe in an attempt to make monkey brain compatible with it. I sorta consider that preference a lucky fact about myself, which will allow me to experience significantly more positive and exotic emotions throughout the far future, if it goes well, than the people who insist they must only feel satisfied after literally eating hamburgers or reading jokes they haven’t read before.
This is probably part of why I feel more urgency in getting an actually useful theory of qualitative experience than most LW users.
Utilitarianism seems to demand such a theory of qualitative experience, but this requires affirming the reality of first-person experience. Apparently, some people here would rather stick their hand on a hot stove than be accused of “dualism” (whatever that means) and will assure you that their sensation of burning is an illusion. Their solution is to change the evidence to fit the theory.
It does if you’re one of the Cool People like me who wants to optimize their qualitative experience, but you can build systems that optimize some other utility target. So this isn’t really quite true.
This is true.
I’m interested in qualia for different reasons:
For me, the personalities of other people are an important type of qualia. I don’t consider knowing someone’s personality to be simple knowledge like “mitochondria is the powerhouse of the cell”. So valuing other people makes me more interested in qualia.
I’m interested in knowing properties of qualia (such as ways to enumerate qualia), not necessarily using them for “cheating” or anything. I.e. I’m interested in the knowledge itself.
Personalities aren’t really qualia as I’m defining them. They’re an aggregation of a lot of information about people’s behavior/preferences. Qualia are things people feel/experience.
Would you consider the meaning of a word (at least in a specific context) to be qualia? For me personalities are more or less holistic experiences, not (only) “models” of people or lists of arbitrary facts about a person. I mean, some sort of qualia should be associated with those “models”/facts anyway? People who experience synesthesia may experience specific qualia related to people.
Maybe it’s wishful thinking, but I think it would be cool if awareness about other conscious beings was important for conscious experience.
Seems weird for your blob of matter to react so emotionally to the sounds or shapes that some blobs have emitted about other blobs. Why would you expect anyone to have a coherent theory of something they can’t even define and measure?
It seems even weirder for you to take such reporting at face value as having any relation to a given blob’s “inner life”, as opposed to a variance in the evolved and learned verbal and nonverbal signaling that such behaviors actually are.
Because they say so. The problem then is why they think they have a coherent theory of something they can’t define or measure.
Just the way I am bro
I expect people who say they have a coherent theory of something to be able to answer any relevant questions at all about that something.
Are you referring to the NYPost link? I think people’s verbal and nonverbal signaling has some relationship with their inner experience. I don’t think this woman is forgoing anaesthetic during surgeries because of pathologies.
But if you disagree, then fine: How do we modify people to have the inner life that that woman is ~pretending to have?
Probably should have included a smiley in my comment, but I do want to point out that it’s reasonable to model people (and animals and maybe rocks) as having highly variant and opaque “inner lives” that bear only a middling correlation to their observable behaviors, and especially to their public behaviors.
For the article on the woman who doesn’t experience pain, I have pretty high credence that there is some truth to her statements, but much lower credence that it maps as simply to “natural stoicism” as the article presents. And I really have no clue “what it’s like” to live that experience, whether it’s less intense and interesting in all dimensions, or just mutes the worst of it, or is … alien.
And since I have no clue how to view or measure an inner life, I have even less understanding of how or whether to manipulate it. I strongly suspect we could make many people have an outer life (which includes talking about one’s inner life) more like the one given, with the right mix of drugs, genetic meddling, and repeated early reinforcement of expectations.
Agreed, basically. That’s part of why we need the theory!
On this forum, or literally everywhere? Because for example I keep seeing people arguing with absolute conviction, even in academic papers, that current AIs and computers can’t possibly be conscious and I can’t figure out how they could ever know that of something that is fundamentally unfalsifiable. I envy their secret knowledge of the world gained by revelation, I guess!
Huh, interesting. Could you give some examples of where people seem to claim this, and if Eliezer is among them, where he seems to claim it? (Would just interest me.)
Attention Schema Theory. That’s the convincing one. But it’s still very rudimentary.
But, you know, it’s still poorly understood. The guy who thought it up has a section in his book on how to make a computer have conscious experiences.
But any theory is incomplete while the brain is not well understood. I don’t think you can expect a fully formed theory right off the bat, with complete instructions for making a feeling, thinking, conscious being. We aren’t there yet.
I’m actually cool with proposing incomplete theories. I’m just annoyed with people declaring the problem solved via appeals to “reductionism” or something, without even suggesting that they’ve thought about answering these questions.
The Prediction Market Discord Message, by Eva_:
I’m pretty interested in this as an exercise of ‘okay yep a bunch of those problems seem real. Can we make conceptual or mechanism-design progress on them in like an afternoon of thought?’
I’m interested too. I think several of the above are solvable issues. AFAICT:
Solved by simple modifications to markets:
Races to correct naive bidders
Defending the true price from incorrect bidders for $ w/o letting price shift
Seem doable with thought:
Billing for information value
Policy conditionals
Seem hard/idk if it’s possible to fully solve:
Collating information known by different bidders
Preventing tricking other bidders for profit
General enterprise of credit allocation for knowledge creation
Good post, it’s underappreciated that a society of ideally rational people wouldn’t have unsubsidized, real-money prediction markets.
Of course in real prediction markets this is exactly what we see. Maybe you could think of PMs as they exist not as something that would exist in an equilibrium of ideally rational agents, but as a method of moving our society closer to such an equilibrium, subsidized by the bets of systematically irrational people. It’s not a perfect such method, but does have the advantage of simplicity. How many of these issues could be solved by subsidizing markets?
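For what it’s worth, the standard way to subsidize a market is Hanson’s logarithmic market scoring rule, where a sponsor-chosen liquidity parameter b caps the sponsor’s worst-case loss at b·ln(n) for n outcomes. A minimal sketch (numbers purely illustrative):

```python
import math

def lmsr_cost(q, b):
    """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b)); the sponsor's
    maximum loss (the subsidy) is bounded by b * ln(len(q))."""
    return b * math.log(sum(math.exp(x / b) for x in q))

def lmsr_price(q, b, i):
    """Instantaneous price (implied probability) of outcome i."""
    denom = sum(math.exp(x / b) for x in q)
    return math.exp(q[i] / b) / denom

def buy_cost(q, b, i, shares):
    """What a trader pays to buy `shares` of outcome i from the market maker."""
    new_q = list(q)
    new_q[i] += shares
    return lmsr_cost(new_q, b) - lmsr_cost(q, b)

# Two-outcome market with b = 100: worst-case subsidy is 100 * ln(2) ≈ 69.3
q = [0.0, 0.0]
print(lmsr_price(q, 100, 0))    # 0.5 before any trades
print(buy_cost(q, 100, 0, 50))  # ≈ 28.1 to buy 50 "yes" shares
```

This doesn’t resolve the listed problems by itself, but it’s the usual starting point for the “subsidized by a sponsor rather than by irrational bettors” version of a prediction market.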
What Discord is this? Sounds cool.
[Redacted]
<3. Thanks for letting us know.
That’s very brave
What happened to lc? They contributed to some good discussions on here, but look to have suddenly disappeared.
To the LW devs—just want to mention that this website is probably now the most well designed forum I have ever used. The UX is almost addictively good and I’ve been loving all of the little improvements over the past year or so.
Ditto here; kudos to everyone involved in creating such an excellent forum design!
I find LW.com hard to use (because I have yet to find a way to disable the mouseovers, which quickly deplete my orienting response, about which I can explain more if asked) but LW is better than most sites in that alternative interfaces can be created. In particular, I use greaterwrong.com as my interface and am pretty satisfied with it (though it was slow for a lot of the last 2 months).
But I strongly upvoted parent because it is good reminder to me of the cognitive diversity in the human population.
I wrote somewhere that this is the only forum that looks to me like the result of Intelligent Design, and not an accident. It’s the only one that looks the way it would if I were intelligently designing the forum MYSELF, including going back in time to fix problems after discovering them (or just thinking in advance for five minutes about each aspect: “how can I hack this / what are the vulnerabilities of this system of rules / how can trolls use it”). The point is not only that, unlike on many other sites, I don’t think “why can’t X be here” every five minutes; the point is that I look somewhere and see that something has been provided for in advance that I hadn’t yet had time to think about, and that some kind of protection has been built against exploitation of vulnerabilities in the system of rules, or even against involuntary errors of human psychology.
I agree completely. The last N weeks or so there have been performance problems, but all of the little things… Version history on posts, strong upvotes/downvotes, restoration of comments… They make writing things just fun.
If we’re still talking about shortcomings, I could name four; I wrote about the first two in my questions: the lack of arrows between sequences; the useless SEQ RERUNs that exist for the sake of their comments, plus the problem of missing nested answers to questions in old comments; lately, performance problems (which turned out to be LessWrong’s problems, not mine, so I didn’t count them before); and the fact that from time to time someone downvotes you once for completely incomprehensible reasons and the score never returns to positive (though as far as I understand, requiring a reason to be given for a downvote would be either a harmful or a not very useful measure). But considering that in all this time I have found a number of downsides that can be counted on the fingers of one hand, while on any other site they pour from every element every second as if from a cornucopia, and instead of fixing them the site gets monthly useless graphical updates… All in all, this is just a surprisingly good result, although (there is no limit to perfection) I hope at least three of them will be fixed in the coming months (about the last one I don’t know, and there seem to be reasons why it isn’t fixed; but just in case, I note this downside, because, as with the performance glitches, it recently turned out that everyone had simply not been reporting it).
The Nick Bostrom fiasco is instructive: never make public apologies to an outrage machine. If Nick had just ignored whoever it was trying to blackmail him, it would have been on them to assert the importance of a twenty-five-year-old deliberately provocative email, and things might not have escalated to the point of mild drama. When he tried to “get ahead of things” by issuing an apology, he ceded that the email was in fact socially significant despite its age, and that he did in fact have something to apologize for, and so opened himself up to the Standard Replies that the apology is not genuine, he’s secretly evil, etc. etc.
Instead, if you are ever put in this situation, just say nothing. Don’t try to defend yourself. Definitely don’t volunteer for a struggle session.
Treat outrage artists like the police. You do not prevent the police from filing charges against you by driving to the station and attempting to “explain yourself” to detectives, or by writing and publishing a letter explaining how sorry you are. At best you will inflate the airtime of the controversy by responding to it, at worst you’ll be creating the controversy in the first place.
Do not assume good faith on Twitter. Ever.
Not because all people online are bad, but because Twitter is a “dark forest”. If there are 999 good people and 1 bad person, it’s the bad person who will take your tweet, maybe modify it a little, put it into most outrageous possible context, write an article about why you are the worst person ever, and share it on all social networks. And that’s the lucky case. In the unlucky case, the story will uncritically be accepted by journalists, then added to Wikipedia, you will get fired, and for the rest of your life, random people on the street will keep yelling at you.
Twitter should be legally required to show you this as a warning every time you are making a tweet.
EDIT:
This was written before I learned the details. Now the analogy with not talking to police seems even better: indeed, every word you say is a potential new incriminating evidence against you (and if it is not, it will simply be ignored), and the worst outcome is that the new evidence will hurt you in a way the old evidence could not.
Question: If I get in trouble with the police, I know I need to find a lawyer. If I get in trouble with an internet mob, and I understand the need to defer to a more experienced person’s advice to navigate the minefield, and I am willing to pay them, whose services exactly should I find? Is there an obvious answer, such as “lawyer” in case of legal trouble?
The professional class would be PR people. I vaguely remember reading that the firm that handled Biden’s sexual assault allegations also did good work for other people.
I actually thought of this extension and cut it from the original post, but, if you need to defend yourself and have simple exonerating evidence, one way might be to find a friend willing to state your reservations without referring to the fact that they’ve spoken to you or you’re feeding them information. This way they can present your side of the story without giving it extra fuel, lending significance to the charges, or directly quoting you with statements you can be hanged for by the Twitter mob.
However, this may also just extend the half-life of the controversy.
Very strongly agree and endorse this message.
I’m not above giving in to incentives, and if the incentives are such that you should not apologise for wrongdoing, then so be it.
Does this generalize to “just ignore Twitter (and other blathering by “the masses”) for most things”? Outside of a pretty small group, I haven’t heard much handwringing, condemnation nor defense of Bostrom’s old messages or his recent apology.
I personally think that personal honor is better supported by a thoughtful apology when something is brought to one’s attention, than by simply ignoring it. Don’t engage in a back-and-forth, and don’t expect the apology to convince the more vocal part of the ’verse. But do be honest and forthright with yourself and those who you respect enough to value their opinions.
From what I can tell (and I haven’t looked that deeply, as I don’t particularly care), Professor Bostrom has done this pretty well, and I don’t expect him to suffer much long-term harm from his early mistakes.
IMO I disagree with the implication that Nick Bostrom shouldn’t have apologized, since for once the Twitter machine is actually right to criticize the apology.
From titotal’s post on why Bostrom’s apology isn’t good, there are several tests that he failed at:
Link below:
https://forum.effectivealtruism.org/posts/KB8XPfh7dJ9uJaaDs/does-ea-understand-how-to-apologize-for-things
Disclaimer: This is a rare action for me to take, and just because I think the Twitter sphere is somewhat right does not mean that any of their conclusions are automatically right, nor does it mean I will care much about what Twitter thinks.
The problem with trade agreements as a tool for maintaining peace is that they only provide an intellectual and economic reason for maintaining good relations between countries, not an emotional one. People’s opinions on war rarely stem from economic self-interest. Policymakers know about the benefits and (sometimes) take them into account, but important trade doesn’t make regular Americans grateful to the Chinese for providing them with so many cheap goods—much the opposite, in fact. The number of people who end up interacting with Chinese people or intuitively understanding the benefits firsthand as a result of expanded business opportunities is very small.
On the other hand, video games, social media, and the internet have probably done more to make Americans feel aligned with the other NATO countries than any trade agreement ever. The YouTubers and Twitch streamers I have pseudosocial relationships with are something like 35% Europeans. I thought Canadians spoke Canadian and Canada was basically some big hippie commune right up until my minecraft server got populated with them. In some weird alternate universe where people are suggesting we invade Canada, my first instinctual thought wouldn’t be the economic impact on free trade, it would be whether my old steam friend Forbsey was OK.
I mean, just imagine if Pewdiepie were Ukrainian. Or worse, some hospital he was in got bombed and he lost an arm or a leg. You wouldn’t have to wait for America to initiate a draft, a hundred thousand volunteers would be carving a path from Odessa to Moscow right now.
If I were God-Emperor and I wanted to calm U.S.-China relations, my first actions would be to make it really easy for Chinese people to get visas, or even to subsidize their travel. Or subsidize Mandarin learning. Or subsidize Google Translate & related applications. Or push really hard for our social media companies to get access to the Chinese market.
It would not be to expand free trade. Political hacks find it exceptionally easy to turn simple trade into some economic boogeyman story. Actually meeting and interacting with the people from that country, having shared media, etc., makes it harder to inflame tensions.
This doesn’t seem like an either-or question. Freer trade and more individual interactions seem complementary to me.
I should note that I’m also pro free trade, because I like money and helping people. I’m just not pro free trade because I think it promotes peace.
The “people are wonderful” bias is so pernicious and widespread I’ve never actually seen it articulated in detail or argued for. I think most people greatly underestimate the size of this bias, and assume opinions either way are a form of mind-projection fallacy on the part of nice/evil people. In fact, it looks to me like this skew is the deeper origin of a lot of other biases, including the just-world fallacy, and the cause of a lot of default contentment with a lot of our institutions of science, government, etc. You could call it a meta-bias that causes the Hansonian stuff to go largely unnoticed.
I would be willing to pay someone to help draft a LessWrong post for me about this; I think it’s important but my writing skills are lacking.
I haven’t seen it articulated, or even mentioned. What is it? It sounds like this is just the common amnesia (or denial) of the rampant hypocrisy in most humans, but I’ve not heard that phrasing.
would it be fair to replace the first “are” (and maybe the second) with something that doesn’t imply essentialism or identity? “people are assumed to be” or “people claim to be” followed by “more altruistic than their behavior exhibits”?
The most salient example of the bias I can think of comes from reading interviews/books about the people who worked in the extermination camps in the Holocaust. In my personal opinion, all the evidence points to them being literally normal people, representative of the average police officer or civil service member pre-1931. Holocaust historians nevertheless typically try very hard to outline some way in which Franz Stangl and crew were specially selected for lack of empathy, instead of raising the more obvious hypothesis that the median person is just not that upset by murdering strangers in a mildly indirect way, because the wonderful-humans bias demands a different conclusion.
This goes double for the entire public conception of killing as the most evil-feeling thing that humans can do, contrasted with actual memoirs of soldiers and the like, who typically state that they were surprised by how little they cared compared to the time they lied to their grandmother or whatever.
I may have the same bias, and may in fact believe it’s not a bias. People are highly mutable and contextual in how they perceive others, especially strangers, especially when they’re framed as outgroup.
The fact that a LOT of people could be killers and torturers in the right (or very wrong) circumstances doesn’t seem surprising to me, and this doesn’t contradict my beliefs that many or perhaps most do genuinely care about others with a better framing and circumstances.
There is certainly a selection effect, for that work and likewise for modern criminal-related work: people with the ability to frame “otherness”, and some individual power drive, tend to be drawn to it. There were certainly lots of Germans who did not participate in those crimes, and there are lots of current humans who prefer to ignore the question of what violence is used against various subgroups*.
But there’s also a large dollop of “humans aren’t automatically ANYTHING”. They’re far more complex and reactive than a simple view can encompass.
* OH! that’s a bias that’s insanely common. I said “violence against subgroups” rather than “violence by individuals against individuals, motivated by membership and identification with different subgroups”.
Yeah, I echo this.
I’ve gone back and forth with myself about this sort of stuff. Are humans altruistic? Good? Evil?
On the one hand, yes, I think lc is right that in some situations people exhibit just an extraordinary lack of altruism and sympathy. But on the other hand, there are other situations where people do the opposite: they’ll, I dunno, jump into a lake at a risk to their own life to save a drowning stranger. Or risk their lives running into a burning building to save strangers (lots of volunteers did this during 9/11).
I think the explanation is what Dagon is saying about how mutable and context-dependent people are. In some situations people will act extremely altruistically. In others they’ll act extremely selfishly.
The way that I like to think about this is in terms of “moral weight”. How many utilons to John Doe would it take for you to give up one utilon of your own? Like, would you trade 1 utilon of your own so that John Doe can get 100,000 utilons? 1,000? 100? 10? Answering these questions, you can come up with “moral weights” to assign to different types of people. But I think that people don’t really assign a moral weight and then act consistently. In some situations they’ll act as if their answer to my previous question is 100,000, and in other situations they’ll act like it’s 0.00001.
My model of utility (and the standard one, as far as I can tell) doesn’t work that way. No rational agent ever gives up a utilon—that is the thing they are maximizing. I think of it as “how many utilons do you get from thinking about John Doe’s increased satisfaction (not utilons, as you have no access to his, though you could say “inferred utilons”) compared to the direct utilons you would otherwise get”.
Those moral weights are “just” terms in your utility function.
And, since humans aren’t actually rational, and don’t have consistent utility functions, actions that imply moral weights are highly variable and contextual.
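A toy version of the “moral weights are just terms in your utility function” framing, purely illustrative (and, per the point above, real people don’t apply one stable weight):

```python
def total_utility(own_payoff, others_payoffs, weights):
    """Toy welfare function: your own payoff plus others' (inferred) payoffs,
    each scaled by the moral weight you assign that person. A weight of 0.01
    means you'd give up 1 unit of your own payoff for 100 units of theirs."""
    return own_payoff + sum(w * p for w, p in zip(weights, others_payoffs))

# The same person, acting in two contexts with wildly different implied weights:
print(total_utility(-1, [100_000], [0.01]))      # 999.0  -> makes the trade
print(total_utility(-1, [100_000], [0.000001]))  # ~ -0.9 -> refuses it
```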
Ah yeah, that makes sense. I guess utility isn’t really the right term to use here.
Recommendations for such memoirs?
Not really a memoir, but a German documentary about WWII might be of interest to you: Der unbekannte Soldat.
I watched it on Amazon Prime and you can still find the title there in a search; I’m not sure if it is only available for rent/sale now or if you can stream it with a Prime membership.
I’m not sure to what extent this is helpful, or if it’s an example of the dynamic you’re refuting, but Duncan Sabien recently wrote a post that intersects with this topic:
Where it connects is that if someone sees [making the world a better place] as simply selecting a better Nash equilibrium, they absolutely will spend time exploring solutionspace/thinking through strategies similar to Goal Factoring or Babble and Prune. Lots of people throughout history have yearned for a better world in a lot of different ways, with varying awareness of the math behind Nash equilibria, or the transhumanist and rationalist perspectives on civilization (e.g. map & territory & biases & scope insensitivity for rationalism, cryonics/anti-aging for transhumanism).
But their goal here is largely steering culture away from nihilism (since culture is a Nash equilibrium), which means steering many people away from themselves, or at least the selves that they would have been. Maybe that’s pretty minor in this case, e.g. because feeling moderate amounts of empathy and living in a better society are both fun, but either way, changing a society requires changing people, and thinking really creatively about ways to change people tears down lots of Chesterton-Schelling fences, and it’s very easy to make really big, damaging mistakes in the process (because you need to successfully predict and avoid all mistakes as part of the competent pruning process, and actually, measurably, consistently succeeding at this is thinkoomph, not just creative intelligence).
Add in conflict theory to the mistake theory I’ve described here, factor in unevenly distributed intelligence and wealth in addition to unevenly distributed traits like empathy and ambition and suspicion-towards-outgroup (e.g. different combinations of all 5 variables), and you can imagine how conflict and resentment would accumulate on both sides over the course of generations. There’s tons of examples in addition to Ayn Rand and Wokeness.
It’s worth separating what people actually believe about how altruistic other people are from what they pretend to believe about the altruism of other people.
If you ask someone whether they believe there’s a chance that their partner would cheat on them, they are most likely to tell you that their partner would not cheat on them. The same person might nonetheless take a few signs that point in the direction of their partner cheating as a huge problem.
I would also expect that beliefs differ a lot between people.
I’m not looking to write a post about this, but I’d be happy to go back and forth with you in the comments about it (no payment required). Maybe that back and forth will help you formulate your thoughts.
For starters, I’m not sure if I understand the bias that you are trying to point to. Is it that people assume others are more altruistic than they actually are? Do any examples come to your mind other than this?
Related: Saving the world sucks
People accept that being altruistic is good before actually thinking about whether they want to do it. And they also choose weird axioms for being altruistic that their intuitions may or may not agree with (like valuing the life of someone in the future the same as the life of someone today).
“The x are more y than they actually are” seems like a contradiction?
Rewrote to be more clear.
So apparently in 2015 Sam Altman said:
Serious question: Is he a comic book supervillain? Is this world actually real? Why does this quote not elicit an emotional reaction from anybody but me?
I was surprised by this quote. On following the link, the sentence by itself seems noticeably out of context; here’s the next part:
PSA: I have realized very recently, after extensive interactive online discussion with rationalists, that they are exceptionally good at arguing. Too good. Probably there’s some inadvertent pre- or post-selection for skill at debating high-concept stuff going on.
Wait a bit before acceding to their position in a live discussion where you start by disagreeing strongly, for maybe intuitive reasons, and then suddenly find the ground shifting beneath your feet. It took me repeated interactions, where I only later realized I’d been hoodwinked by faulty reasoning, to notice the pattern.
I think in general believing something before you have intuition around it is unreliable or vulnerable to manipulation, even if there seems to be a good System 2 reason to do so. Such intuition is specialized common sense, and stepping outside common sense is stepping outside your goodhart scope where ability to reliably reason might break down.
So it doesn’t matter who you are arguing with, don’t believe something unless you understand it intuitively. Usually believing things is unnecessary regardless, it’s sufficient to understand them to make conclusions and learn more without committing to belief. And certainly it’s often useful to make decisions without committing to believe the premises on which the decisions rest, because some decisions don’t wait on the ratchet of epistemic rationality.
I’m on board with this. It’s a common failure of reasoning in this community, and in humanity in general imo—people believing each other too early because of confident-sounding reasoning. I’ve learned to tell people I’ll get back to them after a few nights’ sleep when someone asks me what my update is about a heavily philosophical topic.
That’s a tricky thing: the method advocated in the Sequences is lightness of belief, which helps in changing your mind but also dismantles the immune system against nonsense, betting that with sufficient overall rationality training this gives a better equilibrium.
I think aiming for a single equilibrium is still inefficient use of capabilities and limitations of human mind, and it’s better to instead develop multiple segregated worldviews (something the Sequences explicitly argue against). Multiple worldviews are useful precisely to make the virtue of lightness harmless, encouraging swift change in details of a relevant worldview or formation of a new worldview if none account for new evidence. In the capacity of paradigms, some worldviews might even fail to recognize some forms of evidence as meaningful.
This gives worldviews opportunity to grow, to develop their own voice with full support of intuitive understanding expected in a zealot, without giving them any influence over your decisions or beliefs. Then, stepping back, some of them turn out to have a point, even if the original equilibrium of belief would’ve laughed their premises out of consideration before they had a chance of conveying their more nuanced non-strawman nature.
I feel like “what other people are telling me” is a very special type of evidence that needs to be handled with extra care. It is something that was generated by a potentially adversarial intelligence, so I need to check for some possible angles of attack first. This generally doesn’t need to be done with evidence that is just randomly thrown at me by the universe, or which I get as a result of my experiments. The difference is, basically, that the universe is only giving me the data, but a human is simultaneously giving me the data (potentially filtered or falsified) and also some advice on how to think about the data (potentially epistemically wrong).
Furthermore, there is a difference between “what I know” and “what I am aware of at this very moment”. There may be some problem with what the other person is telling me, but I may not necessarily notice it immediately. Especially when the person is drawing my attention away from that on purpose. So even if I do not see any problem with what that person said right now, I might notice a few problems after I sleep on it.
My own mind has all kinds of biases; how I evaluate someone’s words is colored by their perceived status, whether I feel threatened by them, etc. That is a reason to rethink the issue later when the person is not here.
In other words, if someone tells me a complex argument “A, therefore B, therefore C, therefore D, therefore you should give me all your money; in the name of Yudkowsky be a good rationalist and update immediately”, I am pretty sure that the rational reaction is to ignore them and take as much time as I need to rethink the issue alone or maybe with other people whom I trust.
By worldviews I mean more than specialized expertise where you don’t yet have the tools to get your head around how something unfamiliar works (like how someone new manipulates you, how to anticipate and counter this particular way of filtering of evidence). These could instead be unusual and currently unmotivated ways of looking at something familiar (how an old friend or your favorite trustworthy media source or historical truths you’ve known since childhood might be manipulating you; how a “crazy” person has a point).
The advantage is in removing the false dichotomy between keeping your current worldview and changing it towards a different worldview. By developing them separately, you take your time becoming competent in both, and don’t need to hesitate in being serious about engaging with a strange worldview on its own terms just because you don’t agree with it. But sure, getting more intuitively comfortable with something currently unfamiliar (and potentially dangerous) is a special case.
while I definitely see your argument, something about this seems weird to me and doesn’t feel likely to work properly. my intuition is that you just have one mashed worldview with inconsistent edges; while that’s not necessarily terrible or anything, and keeping multiple possible worldviews in mind is probably good, my sense is that “full support [as] expected in a zealot” is unhealthy for anyone. something or other overoptimization?
I do agree that discussing multiple worldviews is an important thing for improving the sanity waterline.
It is weird in the sense that there is no widespread practice. The zealot thing is about taking beliefs-within-a-worldview (that are not your beliefs) seriously, biting the bullet, which is important for naturally developing any given worldview the way a believer in it would, not ignoring System 2 implications that challenge and refine preexisting intuition, making inferences according to its own principles and not your principles. Clearly even if you try you’ll fail badly at this, but you’ll fail even worse if you don’t try. With practice in a given worldview, this gets easier, an alien worldview obtains its own peculiar internal common sense, a necessary aspect of human understanding.
The named/distinct large worldviews is an oversimplification, mainly because it’s good to allow any strange claim or framing to have a chance of spinning up a new worldview around itself if none would take it as their own, and to merge worldviews as they develop enough machinery to become mutually intelligible. The simplification is sufficient to illustrate points such as a possibility of having contradictory “beliefs” about the same claim, or claims not being meaningful/relevant in some worldviews when they are in others, or taking seriously claims that would be clearly dangerous or silly to accept, or learning claims whose very meaning and not just veracity is extremely unclear.
Studying math looks like another important example, with understanding of different topics corresponding to worldviews where/while they remain sparsely connected, perhaps in want of an application to formulating something that is not yet math and might potentially admit many kinds of useful models. Less risk of wasting attention on nonsense, but quite a risk of wasting attention on topics that would never find a relevant application, were playing with math and building capacity to imagine more kinds of ideas not a goal in itself.
Note also that they may be taking positions which are selected for being easy to argue—they’re the ones they were convinced by, of course. Whether you think that correlates with truth is up to you—I think so, but the correlation isn’t strong enough on its own.
I don’t know exactly what you mean by “acceding” to a position in a discussion—if you find the arguments strong, you should probably acknowledge that—this isn’t a battle, it’s a discussion. If you don’t find yourself actually convinced, you should state that too, even if your points of disagreement are somewhat illegible to yourself (intuition). And, of course, if you later figure out why you disagree, you can re-open the discussion next time it’s appropriate.
An interesting whitepill hidden inside Scott Alexander’s SB 1047 writeup was that lying doesn’t work as well as predicted in politics. It’s possible that if the opposition had lied less often, or we had lied more often, the bill would not have gotten a supermajority in the Senate.
Where does he say this? (I skimmed and didn’t see it.)
Link here: https://www.astralcodexten.com/p/sb-1047-our-side-of-the-story
Many of you are probably wondering what you will do if/when you see a polar bear. There’s a Party Line, uncritically parroted by the internet and wildlife experts, that while you can charge/intimidate a black bear, polar bears are Obligate Carnivores and the only thing you can do is accept your fate.
I think this is nonsense. A potential polar bear attack can be defused just like a black bear attack. There are loads of YouTube videos of people chasing polar bears away by making themselves seem big and aggressive, and I even found some indie documentaries of people who went to the Arctic expecting to be able to do this. The main trick seems to be to resist the urge to run away, make yourself look menacing, and commit to warning charges in the bear’s general direction until it leaves.
I can’t decide what the epistemic status of that post is, but in the same spirit, here’s how to tell the difference between a black bear and a grizzly. Climb a tree. A black bear will climb up after you and eat you, while a grizzly will knock down the tree and eat you.
I believe the intended message of the “fight back, lay down, goodnight” maxim is “Thou shalt not generalize your experience with black bears to grizzlies!” I don’t expect there is much danger of someone asking “if not friend, why friend shaped?” of polar bears; they just fill out the Rule of Three.
It’s a lot like the “red touches yellow, he’s a friendly fellow; red touches black, you’re dead, Jack” mnemonic for snakes: people are very likely to encounter the (relatively) harmless one, and you really want them not to learn the wrong lessons from that.
Your snake mnemonic is not the standard one and gives an incorrect, inverted result. Was that intentional?
This is a coral snake, which is dangerously venomous:
This is a king snake, which is totally harmless unless you’re a vole or something:
The mnemonic I’ve heard is “red and yellow, poisonous fellow; red and black, friend of Jack”
I guess the point of the official party line is to avoid kids going and trying to scare polar bears.
As opposed to other species of bear, which are safe for children to engage with?
Teddy bears.
Now that I think about it, maybe teddy bears teach our kids some really dangerous habits.
There are too many nonpolar bears in the US to keep up the lie.
Does anybody here have any strong reason to believe that the ML research community norm of “not taking AGI discussion seriously” stems from a different place than the oil industry’s norm of “not taking carbon dioxide emission discussion seriously”?
I’m genuinely split. I can think of one or two other reasons there’d be a consensus position of dismissiveness (preventing bikeshedding, for example), but at this point I’m not sure, and it affects how I talk to ML researchers.
“It is difficult to get a man to understand something, when his salary depends upon his not understanding it!”
Upton Sinclair
I’m not sure the “ML Research Community” is cohesive enough (nor, in fact, well-defined enough) to have very strong norms about this. Further, it’s not clear that there needs to be a “consensus reasoning” even if there is a norm—different members could have different reasons for not bringing it up, and once it’s established, it can be self-propagating: people don’t bring it up because their peers don’t bring it up.
I think if you’re looking for ways to talk to ML researchers, start small, and see what those particular researchers think and how they react to different approaches. If you find some that work, then expand it to more scalable talks to groups of researchers.
I don’t expect AI researchers to achieve AGI before they find one or more horrible uses for non-general AI tools, which may divert resources, or change priorities, or do something else which prevents true AGI from ever being developed.
Because of the low chance of existential risk or a singularity utopia. Here’s the thing: technologies are adopted first at a low level by early adopters, then they become cheaper and better, then they more or less become very popular. No technology has ever had the asymptotic growth or singularity that ML/AI advocates claim will happen. So we should be very skeptical about any claims of existential risk.
On climate change, we both know it will be serious and that it is not an existential risk or a civilization collapse disaster.
I think the best way to look at it is as climate change way before it was mainstream.
Lie detection technology is going mainstream. ClearSpeed is such an improvement over polygraphs in accuracy and ease of use that various government, law enforcement, and military organizations are starting to notice. In 2027 (edit: maybe more like 2029) it will be common knowledge that you can no longer lie to the police, and you should prepare for this eventuality if you haven’t.
I think it’s possible to beat such lie detectors by considering the question in such a way that you get the answer you want. “Did you kill that man?” “No” (mental framing: the knife killed him/he killed himself by annoying me/I’m a different person today/My name is not “you” so it’s technically false, etc)
I would bet that the hesitation caused by doing the mental reframe would be picked up by this.
The counter to this is, always take your time whether you need to or not.
Lie detection technology must be open-sourced. It could fix literally everything. Just ask people “how much do you want to fix literally everything”, “how much did you think about ways to do better and avoid risk”, “do you have the skills for this position or think you can get them”, etc. So many profoundly incredible things are downstream of finding and empowering the people who give good answers.
It’s AI-based, so my guess is that it uses a lot of somewhat superficial correlates that could be gamed. I expect that if it went mainstream it would be Goodharted.
I expect Goodhart would hit particularly bad if you were doing the kind of usage I guess you are implying, which is searching for a few very well selected people. A selective search is a strong optimization, and so Goodharts more.
A more concrete example I have in mind, which maybe applies to the technology right now: there are people who are good at lying to themselves.
That’s not really the kind of usage I was thinking of; I was thinking of screening out low-honesty candidates from a pool who had already qualified to join a high-trust system (which currently does not exist for any high-stakes matter). Large amounts of sensor data (particularly from people lying and telling the truth during different kinds of interviews) will probably be necessary, but will need to focus on specific indicators of lying, e.g. discomfort or heart rate changes or activity in certain parts of the brain, and extremely low false positive and false negative rates probably won’t be feasible.
Also, hopefully people would naturally set up multiple different tests for redundancy, each of which would have to be Goodharted separately, and each false negative (a case of a uniquely bad person being revealed as bad after passing the screening) would be added to the training data. Periodically re-testing people for the concealed emergence of low-trust tendencies would further facilitate this. Sadly, whenever a person slips through the cracks and lies and discovers they got away with it, they will know that they got away with it and continue doing it.
Do you have any source for the technology being an improvement in accuracy over polygraphs?
I’m not sure I can go into detail, but the 97% true positive (i.e. lie) detection rate cited on the website is accurate. More important, people who can administer polygraphs or know how they work can defeat polygraphs. These tests are apparently much more difficult to cheat, at least for now & while they’re proprietary.
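One caveat worth spelling out: a 97% true positive rate by itself doesn’t tell you how many flagged answers are actually lies; that depends on the false positive rate and on how often people in the screened population lie, neither of which is given here. A quick Bayes sketch with made-up numbers for both:

```python
def p_lie_given_flag(sensitivity, false_positive_rate, base_rate):
    """P(lie | test flags you), via Bayes' rule."""
    p_flag = sensitivity * base_rate + false_positive_rate * (1 - base_rate)
    return sensitivity * base_rate / p_flag

# 97% sensitivity, 5% of screened statements are lies (assumed); the posterior
# swings a lot depending on the (assumed) false positive rate:
for fpr in (0.01, 0.05, 0.20):
    print(fpr, round(p_lie_given_flag(0.97, fpr, 0.05), 2))  # ~0.84, ~0.51, ~0.2
```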
Hey [anonymous]. I see you deactivated your account. Hope you’re okay! Happy to chat if you want on Signal at five one oh, nine nine eight, four seven seven one (also a +1 at the front for US country code).
(Follow-up: [anonymous] reached out, is doing fine.)
Pretty much ~everybody on the internet I can find talking about the issue both mischaracterizes and exaggerates the extent of child sex work inside the United States, often to a patently absurd degree. Wikipedia alone reports that there are anywhere from “100,000-1,000,000” child prostitutes in the U.S. There are only ~75 million children in the U.S., so I guess Wikipedia thinks it’s possible that more than 1% of people aged 0-17 are prostitutes. As in most cases, these numbers are sourced from “anti sex trafficking” organizations that, as far as I can tell, completely make them up.
Actual child sex workers—the kind that get arrested, because people don’t like child prostitution—are mostly children who pass themselves off as adults in order to make money. Part of the confusion comes from the fact that the government classifies any instance of child prostitution as human trafficking, regardless of whether or not there’s evidence the child was coerced. Thus, when the Department of Justice reports that federal law enforcement investigated “2,515 instances of suspected human trafficking” from 2008-2010, and that “forty percent involved prostitution of a child or child sexual exploitation”, it means that it investigated ~1000 possible cases of child prostitution, not that it found 1000 child sex slaves.
People believe a lot of crazy things, but I am genuinely flabbergasted at how many people find it plausible that there’s an entire underworld industry of kidnapping children and selling them to pedophiles in first world countries. I know why the anti sex trafficking orgs sell these stories—they’re trying to attract donations, and who is going to call out an “anti sex trafficking” charity? But surely most people realize that it would be very hard for an organized child rape cabal to spread word about their offerings to customers without someone alerting police.
They do the same thing with “child pornography”: that’s mostly teenagers sexting. And a girl was convicted for it too, charged with distributing child pornography of herself: link.
The other day I was trying to think of information leaks that a competent conspiracy couldn’t prevent, regarding this. I just thought of one small one: people will sometimes randomly die or have their homes raided. If the slavery is common, then sometimes the slaves will be discovered during these events. Even if the escapees wanted to silence the story out of shame, cops would probably gossip to the press.
So you can probably tally such events, crunch the numbers, and get a decent conspiracy-resistant estimate.
Haven’t looked too closely at this, but since there are some upvotes, wanted to comment with my initial two thoughts:
child consent is tricky.
likely many are foreign children, which may or may not be in the 75 million statistic
It is good to think critically, but I think it would be beneficial to present more evidence before drawing this conclusion.
Guys, what’s up with the Mercator map projection on the homepage? I thought we were nerds?
This also annoyed me after first noticing how neat it is that I can see my house on the map.
should be
thanks
Now is the time to write to your congressman and (may allah forgive me for uttering this term) “signal boost” about actually effective AI regulation strategies—retroactive funding for hitting interpretability milestones, good liability rules surrounding accidents, funding for long term safety research. Use whatever contacts you have, this week. Congress is writing these rules now and we may not have another chance to affect them.
Where is the “How much do you agree with this, separate from whether you think it’s a good comment?” button when you actually need it?
Noticed something recently. As an alien, you could read pretty much everything Wikipedia has on celebrities, both on individual people and the general articles about celebrity as a concept… And never learn that celebrities tend to be extraordinarily attractive. I’m not talking about an accurate or even attempted explanation for the tendency, I’m talking about the existence of the tendency at all. I’ve tried to find something on wikipedia that states it, but that information just doesn’t exist (except, of course, implicitly through photographs).
It’s quite odd, and I’m sure it’s not the only example. “Celebrities are attractive” is one obvious piece of some broader set of truisms that seem to be completely missing from the world’s most complete database of factual information.
Analyzing or talking about status factors is low-status. You do see information about awards for beauty, much like you can see some information about finances, but not much about their expenditures or lifestyle.
Part of the issue is likely that celebrity, as Wikipedia approaches the word, is broader than just modern TV, film, etc. celebrity; it includes a wide variety of people who are well known in some other way and not especially likely to be exceptionally attractive. Individual preferences about who is attractive vary, but many politicians, authors, radio personalities, famous scientists, etc. are not conventionally attractive in the way movie stars are, and yet these people are still celebrities in a broad sense. However, I’ve not dug into the depths of Wikipedia to see whether, for example, this gap holds up on pages that talk more directly about the qualities of film stars.
I think there’s also a “it’s obvious to everyone, so archaeologists of the future won’t find any mention of it because no one has had to explain it to anyone” factor. (I heard that archaeologists and historians know much less about everyday life than about significant events, although the former was obviously encountered much more often)
The Olympics are really cool. I appreciate that they exist. There are some timelines out there where they don’t have an Olympics and nobody notices anything is wrong.
Let me put in my 2c now that the collapse of FTX is going to be mostly irrelevant to effective altruism except inasmuch as EA and longtermist foundations no longer have a bunch of incoming money from Sam Bankman-Fried. People are going on and on about the “PR damage” to EA by association because a large donor turned out to be a fraud, but are failing to actually predict what the concrete consequences of such a “PR loss” are going to be. Seems to me like they’re making the typical fallacy of overestimating general public perception[1]’s relevance to an insular ingroup’s ability to accomplish goals, as well as the public’s attention span in the first place.
As measured by what little Rationalists read from members of the public while glued to Twitter for four hours each day.
Falling birthrates is the climate change of the right:
Vaguely tribally valenced for no really good reason
Predicted outcomes range from “total economic collapse, failed states” to “slightly lower GDP growth”
People use it as an excuse to push radical social and political changes when the real solutions are probably a lot simpler if you’re even slightly creative
I wonder what the optimal population size is, because it seems to me that most people say either “more” or “less” (and yes, it seems strongly correlated with the political tribe), but no one ever gives an exact number. I suspect there is no optimal number; that the people who say “more” or “less” will keep saying that regardless.
Too bad that more nuanced views, such as “let’s have more good and competent people, but fewer evil and incompetent people” are definitely outside the Overton window. :D
I mean, most moral theories do either give the answers of “zero”, “as large as can be fed”, or “a bit less than as large as can be fed”. Given the potential to scale feeding in the future, the latter two round off to “infinity”.
Most justice systems seem to punish theft on a log scale. I’m not big on capital punishment, but it is actually bizarre that you can misplace a billion dollars of client funds and escape the reaper in a state where that’s done fairly regularly. The law seems to be saying: “don’t steal, but if you do, think bigger.”
Yeah. It’s really weird.
And relatedly, I’m not sure about capital punishment, but it seems obvious to at least attempt to make fines proportionate to net worth or something. I.e., Bill Gates shouldn’t get the same sized speeding ticket as John Doe on welfare.
This feels like it’d be political policy that is low hanging fruit. I suspect that it isn’t because of EMH reasons, but I don’t understand the reasons why it isn’t.
I don’t agree with the take about net worth. The fine should just be whatever makes the state ambivalent about the externalities of speeding. If Bill Gates wants to pay enormous taxes to speed aggressively then that would work too.
Hm, I hadn’t thought about it that way. I was just thinking that the goal of the fine is some combination of 1) punitive and 2) deterrent, and neither of those goals are accomplished if you fine Bill Gates $200. But yeah, I guess if you make the fine large enough such that the state is ambivalent, maybe it all works out.
Theft of any amount over a hundred or so dollars is evil and needs to be punished. Let’s say you punish theft of $100 by a weekend in jail. Extrapolate that on a linear scale and you’ll have criminals who non-violently stole $20,000 doing more than double the jail time that a criminal who cold-cocked a stranger and broke his jaw would get. Doesn’t really make sense.
It strikes me that I’m not sure whether I’d prefer to lose $20,000 or have my jaw broken. I’m pretty sure I’d prefer to have my jaw broken than to lose $200,000, though. So, especially in the case that the money cannot actually be extracted back from the thief, I would tend to think the $200,000 theft should be punished more harshly than the jaw-breaking. And, sure, you’ve said that the $20,000 would be punished more harshly than the jaw-breaker, but that’s plausibly just because 2 days is too long for a $100 theft to begin with.
With a billion dollars you can probably hire better lawyers.
Do other crimes, for example murder, follow a similar pattern? Like, at some moment they might execute you, but what are they going to do if you kill 10 times more people?
Can they cancel you more if you post 10 times more offensive tweets?
Maybe everything is (sub-)logarithmic, because that’s how people think.
In which case, a group of rationalist criminals should precommit that if they get caught, they will randomly choose one of them, who will accept the blame for everything.
This isn’t the source of the trend; the sentencing guidelines for fraud are actually literally, explicitly logarithmic. The government recommends directly that sentences follow a curve of 2x price --> 2 more years.
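For illustration, here is a minimal sketch of what an explicitly logarithmic schedule looks like; the base term, threshold, and per-doubling increment below are made-up numbers for the sake of the example, not the actual guidelines table:

```python
import math

def toy_fraud_sentence(amount_stolen: float,
                       base_amount: float = 10_000,
                       base_years: float = 1.0,
                       years_per_doubling: float = 2.0) -> float:
    """Toy logarithmic sentencing curve: every doubling of the amount
    stolen beyond `base_amount` adds `years_per_doubling` more years.
    All constants are illustrative, not the real federal guidelines."""
    if amount_stolen <= base_amount:
        return base_years
    doublings = math.log2(amount_stolen / base_amount)
    return base_years + years_per_doubling * doublings

# Stealing 10,000x more money yields nowhere near 10,000x the sentence:
print(toy_fraud_sentence(1e5))  # ~7.6 years
print(toy_fraud_sentence(1e9))  # ~34.2 years
```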
There seems to be a MAX_PUNISHMENT in the justice system (we don’t devolve into torture, etc.), which is reasonable. But with things like armed robbery you would get convicted for each individual count, not on a log scale.
This is (I suspect) a very common strategy among even regular criminals. You can think of it like a trade between law enforcement and gangs; the government gets their clearances and avoids the potential embarrassment of a partially-solved case, and the serial killers send only their John Wayne Gacy to jail.
LessWrong as a website has gotten much more buggy for me lately. 6 months ago it worked like clockwork, but recently I’m noticing that refreshes on my profile page take something like 18 seconds to complete, or even 504 (!). I’m trying to edit my old “pessimistic alignment” post now and the interface is just not letting me; the site just freezes for a while and then refuses to put the content in the text box for me to edit.
Marvelous. I didn’t mention this because I thought the problem was not on LessWrong’s side, since in my country a lot of things have lately been slowing down, blocking, or denying access, from at least three directions at once: the state / providers, other countries / companies, and those who simply do not want problems.
To synchronize against the illusion of transparency, here are the specific errors I myself see: bad gateway (seems to be somehow related to following links within the site and back); “Error: NotFoundError: Failed to execute removeChild on Node: The node to be removed is not a child of this node.” (red, replaces the entire page, sometimes appears when you click “submit”); long page loading at the start; long loading of the rest of the page after the profile karma indicator and new-message counter have loaded; and, when double-tapping on the phone, the vote is reset rather than strengthened.
The performance problems have also been annoying me, though I don’t think it’s been 6 months since they’ve gotten worse (I think it’s been more like 4 weeks based on my read of the logs, which have sadly overlapped with some time period where it’s been hard for me or others to focus on fixing them). I’ve really hated it, and if I didn’t have COVID right now, would probably be trying to fix them right now.
Not sure what’s up about the editor. I don’t think I’ve experienced many additional problems here, though we have been rolling out a new editor, so new bugs aren’t that surprising. A bug report via Intercom would be greatly appreciated.
On reflection, it’s very likely I’m misremembering it as being that far back; I just picked a date when the site definitely worked fast.
Have you reported this to the staff?
It doesn’t feel like I’m getting smarter. It feels like everybody else is getting dumber. I feel as smart as I was when I was 14.
Keep a diary. Human memory is unreliable and often fills in the gaps by the best guess—if you believe something now, your memory will try to convince you that you have always believed it (unless you have dramatic evidence to the contrary), because it is easier than tracking your beliefs over time. (Also it protects you from the emotional pain of knowing that you have changed your mind.) A diary may show you that the past was not as you remember it.
At least this is how it works for me. I have no dramatic conversions in my past; my opinions have changed fluidly. So absent hard evidence it is easy for me to imagine that a 13-year-old me (the earliest I can remember having actual opinions on things) was basically just like me today, only in a younger body, minus all the experience and professional skills. But when I found my old diary, I screamed in horror and quickly destroyed the evidence.
The parts of my personality that stay mostly unchanged for a long time are values and preferences. As far as I remember, I was interested in math and later in computers, I was interested in truth and helping others. (I have some evidence for that, such as doing math olympiad, or getting in trouble because I asked too much.) My beliefs, however… let’s just say that before 30 I was quite stupid. Yeah, I didn’t feel that way. Most stupid people don’t.
Before 30, I was also a moron. But I only know this because I had an ideological epiphany after that and my belief system changed abruptly. Scales-fell-from-my-eyes type situation. When I turned 33, I started keeping a diary because I noticed I have a terrible memory for even fairly recent things, so maybe going forward subtle changes will become more salient.
That said, some things seem more impervious to change. For instance the “shape” of things that give you pleasure. Maybe you liked 3d puzzles as a child and now you like playing in Blender in your free time. Not the same thing, but the same shape.
I’d say what changed for me was my model of the world, other people, myself, and a corresponding change of priorities.
Having (officially undiagnosed) Asperger’s, I basically had no idea how other people think and behave. I did things that felt natural to me, and people reacted, often illogically. I didn’t realize that most people lie most of the time, and even when I started suspecting that, I wasn’t able to figure out the truth.
But it wasn’t merely my personal stupidity. It also feels like I was culturally discouraged from figuring out the truth. Thinking not-nice things about other people seems frowned upon; that is what villains typically do, and they are always proven wrong at the end of the story. Then again, it’s the autism spectrum that makes you believe the narrative more than the things you actually see. The hypothesis that was taboo to consider despite being a good first approximation, was: “what if most people are actually selfish and kinda stupid, and they lie whenever is convenient, including to themselves, and most of them worry a lot about how others perceive them?” And suddenly, so many things started making sense.
The important thing is that not all people are like this, so you need to tell them apart (but judging people is another cultural taboo), and keep the smart and good ones around you, because (again as a first approximation) people don’t change. To do this successfully, you need to stop confusing “smart” with “acts like a stereotypical Mensa member” and “suffers from big ego and the Dunning–Kruger effect”. Smartness is more about flexible thinking, and often results in the person being good at what they do, even if it is not a stereotypical intellectual task. Also, someone who is nice and doesn’t do anything obviously stupid, probably is quite smart (because most people do stupid things) and “being nice” is the thing they are good at. -- I wish I knew all this when I was at high school and university, surrounded by many people I could choose from.
EDIT: After thinking about it more, this is ultimately a problem of signaling. As a null hypothesis, I guess everyone does the typical mind fallacy. Good people assume that most people are good, bad people assume that most people are bad, et cetera. Now the problem is that to achieve a more realistic perspective, good people need to update towards most people being actually not that good… but the people most likely to give you this update are the ones you do not want to associate with. Basically, “a bad person who assumes that everyone else is bad and that only hypocrites say otherwise” sounds quite similar to “a good person, who originally assumed that everyone else was good, then got burned, and now wants to share the costly lesson with other good people”. (If you listen to them for a longer time, you will notice the difference, because the bad person will conclude “and therefore, it is only fair for us to also hurt others”, while the good person will conclude “and I still keep trying to help others, but I no longer expect that they will reciprocate”.)
Another big update was related to a career. Yes, working hard is important; that’s how you level up. But this will not translate to rewards automatically; you need to negotiate, sometimes you need to leave for a place that values you more. You also need to be strategic about which skills to level up; some things that your employer wants you to learn (obsolete technologies that they still use, internally developed systems) will be useless when you change jobs. The relation between how difficult the work is, how stressful the work environment is, and how much they pay you is mostly random; do not hesitate to leave an unpleasant place thinking “if I can barely handle this, I am not good enough for a better paid place”; chances are that your next job will be easier and will pay more (at least because the salaries are now increased by inflation).
I seriously doubt on priors that Boeing corporate is murdering employees.
Most companies don’t threaten their employees with physical violence. According to another Boeing whistleblower Sam Salehpour, that seems to happen at Boeing.
Since Boeing is a defense contractor, I would expect Boeing corporate to have better relationships than most corporations with the kind of people you would hire for such a task.
I mean, sure, but I’ve been updating in that direction a weirdly large amount.
This sort of begs the question of why we don’t observe other companies assassinating whistleblowers.
Robin Hanson has apparently asked the same thing. It seems like such a bizarre question to me:
Most people do not have the constitution or agency for criminal murder
Most companies do not have secrets large enough that assassinations would reduce the size of their problems in expectation
Most people who work at large companies don’t really give a shit if that company gets fined or into legal trouble, and so they don’t have the motivation to personally risk anything organizing murders to prevent lawsuits
I think my model of people is that people are very much changed by the affordances that society gives them and the pressures they are under. In contrast with this statement, a lot of hunter-gatherer people had to be able to fight to the death, so I don’t buy that it’s entirely about the human constitution. I think if it was a known thing that you could hire an assassin on an employee and unless you messed up and left quite explicit evidence connecting you, you’d get away with it, then there’d be enough pressures to cause people in-extremis to do it a few times per year even in just high-stakes business settings. Also my impression is that business or political assassinations exist to this day in many countries; a little searching suggests Russia, Mexico, Venezuela, possibly Nigeria, and more.
I generally put a lot more importance on tracking which norms are actually being endorsed and enforced by the group / society as opposed to primarily counting on individual ethical reasoning or individual ethical consciences.
(TBC I also am not currently buying that this is an assassination in the US, but I didn’t find this reasoning compelling.)
Oh definitely. In Mexico in particular, business pairs up with organized crime all of the time to strong-arm competitors. But this happens when there’s an “organized crime” that tycoons can cheaply (in terms of risk) pair up with. Also, OP asked specifically about why companies don’t assassinate whistleblowers all the time.
That was not criminal murder by the standards of the time. Arguably a lot of gang murders committed in the United States are committed by people not capable of or willing to go out and murder people on their own.
In worlds where status is doled out based on something objective, like athletic performance or money, there may be lots of bad equilibria & doping, and life may be unfair, but at the end of the day competitors will receive the slack to do unconventional things and be incentivized to think rationally about the game and their place in it.
In worlds where status is doled out based on popularity or style, like politics or Twitter, the ideal strategy will always be to mentally bully yourself into becoming an inhuman goblin-sociopath, and keep hardcoded blind spots. Naively pretending to be the goblin in the hopes of keeping the rest of your epistemics intact is dangerous in these arenas; others will prod your presentation and try to reveal the human underneath. The lionized celebrities will be those that embody the mask to some extent, completely shaving off the edges of their personality and thinking and feeling entirely in whatever brand of riddlespeak goes for truth inside their subculture.
There’s truth in what you’re saying. At the same time, I feel like people have an instinctive desire for clarity over riddlespeak. I think it’s the same instinct that makes people favor 4k televisions over standard definition. I think it’s possible to make a twitter-like medium that discourages hardcoded blind spots.
A surprisingly large number of people seem to apply status-like reasoning to inanimate goods. To many, if someone sells a coin or an NFT for a very high price, this is not merely curious or misguided: it’s outright infuriating. They react as if others are making a tremendous social faux pas, and even worse, as if society is validating their missteps.
Stop using twitter.
I did, 3-6 months ago.
I did, two years ago.
I don’t use twitter very much, mostly reading links and threads someone points to from some other medium. I pretty much never publicly tweet. I presume I’m not your target for this advice, but for clarity are you worried about consumption (wasting time, developing biased views) or production (producing bias or over-simple models)?
Most importantly, do you have a “do more of X” to augment your “do less/none of Y (Y: twitter)”?
ppl really out here dropping alignment proposals like SCP-001 entries
Sometimes people say “before we colonize Mars, we have to be able to colonize Antarctica first”.
What are the actual obstacles to doing that? Is there any future tech somewhere down the tree that could fix its climate, etc.?
The Antarctic Treaty (and subsequent treaties) forbid colonization. They also forbid extraction of useful resources from Antarctica, thereby eliminating one of the main motivations for colonization. They further forbid any profitable capitalist activity on the continent. So you can’t even do activities that would tend toward permanent settlement, like surveying to find mining opportunities, or opening a tourist hotel. Basically, the treaty system is set up so that not only can’t you colonize, but you can’t even get close to colonizing.
Northern Greenland is inhabited, and it’s at a similar latitude.
(Begin semi-joke paragraph) I think the US should pull out of the treaty, and then announce that Antarctica is now part of the US, all countries are welcome to continue their purely scientific activity provided they get a visa, and announce the continent is now open to productive activity. What’s the point of having the world’s most powerful navy if you can’t do a fait accompli once in a while? Trump would love it, since it’s simultaneously unprecedented, arrogant and profitable. Biggest real estate development deal ever! It’s huuuge!
We arguably have already colonized Antarctica. See Wikipedia.
A similar point would be: There is no permanent deep sea settlement (an underwater habitat), although this would be much easier to achieve than a settlement on Mars.
Surely you cannot change the climate of Antarctica without changing the climate of Earth as a whole.
In principle I suppose one could build very large walls around it to reduce heat exchange with the rest of Earth, and a statite mirror (or a few slowly orbiting ones) to warm it up. That would change the southern-hemisphere circulation patterns somewhat, but could be arranged not to affect the overall heat balance of the rest of Earth.
This is very unlikely to happen for any number of good reasons.
I thought we had a bunch of treaties which prevented that from happening?
A man may climb the ladder all the way to the top, only to realize he’s on the wrong building.
What percent of everett branches is Trump dead since yesterday morning?
I don’t know, but I expect the fraction is high enough to constitute significant empirical evidence towards the Will quantum randomness affect the 2028 election? question (since quantum randomness affects the weather, the wind speed affects bullet trajectories, and whether or not one of the candidates in the 2024 election was assassinated seems pretty influential on the 2028 election).
Not sure, but it seems to me that in the vast majority of Everett branches in which shots were fired at Trump, either they all missed or at least one of them scored a hit solid enough to kill or seriously injure Trump. The outcome that happened in our branch (graze his cheek & ear) is pretty unlikely. I don’t think there are any implications of this, it’s just interesting.
Is the “percent of everett branches” a literal question, or just a clever way of saying “prior probability at the moment of gunfire”? Taken literally, there’s an infinitesimal fraction of branches that contain humans, a tiny fraction of those contain Trump, a trivial slice of THOSE have him in the public eye enough to get shot at, only a few of which have that event and that shooter present, etc...
It’s a lot like saying “what’re the chances that this week’s lottery would be EXACTLY 11,23,44,46, 51, 60”? It depends on when you ask the question, and what your reference set is. The reference set of everett branches is near-infinite (I haven’t seen a formal treatment arguing that it’s truly infinite, nor what kind of infinity), so any given set of similar-in-some-ways branches is infinitesimal. At a human probability level, the chances that Trump died are now 0 (or at least near-zero; you can never be truly certain).
The chances of being hit in the head but not brain-damaged are rather small, I think less than 10 percent. So in 90 percent of branches where shots were fired toward his head, he is seriously injured or dead.
However, climbing onto the roof without a Secret Service reaction was also a very unlikely event, maybe only a 10 percent chance of success.
Combining the two, I get a 9 percent chance of him being dead or seriously injured yesterday.
Nate Silver alludes to this question too.
I regret to tell you that most of the time intelligence officers just do what they’re told.
Yes, if you have an illegal spying program running for ten years with thousands of employees moving in and out, that will run a low-grade YoY chance of being publicized. Management will know about that low-grade chance and act accordingly. But most of the time you as a civilian just never hear about what it is that intel agencies are doing, at least not for the first fifty or so years.
I also regret to inform you that it is much easier for state departments to cover up a managerial decision not to act on information, and later pretend it was a “mistake” or an “interbranch communication issue”, than to get away with active measures. Only conspiracy theorists decided that Bush not acting on the “Bin Laden determined to strike in U.S.” presidential brief, the one that mentioned that Bin Laden was preparing to hijack planes a month before 9/11, was particularly suspicious. In that case, an unsuspicious reaction was probably reasonable, but applying that standard of evidence universally means there is practically no amount of information about the state of Mossad that would “prove” to mainstream media or Wikipedia editors that they either elected to ignore Hamas’s incoming attack entirely or muttered “heads I win, tails I win”.
As for whether or not a country would be “willing” to do this particular thing, sacrifice a few thousand civilians to provide a casus belli for an annexation or to shore up support from abroad… Well, it’s not par for the course, at least in developed democracies, but, Many Such Cases, as the saying goes. My moral self is appalled by the lack of respect for deontological guardrails, but I will admit that this highlights the violent hatred of Israel’s enemies in a way that I think dismantling the plot would have been unable to. How else are they supposed to justify annexing the Gaza strip and incidentally expelling much of the native population, without it being a reaction to clear crimes against humanity? Where else is Netanyahu supposed to get his poll bump from?
It’s worth keeping in mind that the actions of Mossad and those of Netanyahu are different. Pentagon leaks suggest that senior Mossad leadership supported protests against Netanyahu’s policies. Former Mossad leaders also spoke out.
That they are acting to give Netanyahu a poll bump, and can do so without internal leaks that undermine the project, seems unlikely to me.
Imagine, that the CIA would have warned Trump of a terror attack and Trump didn’t act. Do you think that would be kept secret in the same way that Bush administration inaction would be kept secret?
It is hard for me to tell whether or not my not-using-GPT4 as a programmer is because I’m some kind of boomer, or because it’s actually not that useful outside of filling Google’s gaps.
Why not both? For me, age and curmudgeonliness make me reject it for not being “enough better”. I’m not sure what my standards are, but I recognize that what I’ve tried so far isn’t perfect, but is probably somewhat faster than search-and-modify. Just not ENOUGH to get me to invest the time in getting as good at it as I am at more traditional semi-plagiarism.
You can test the latter hypothesis by trying it (more) :)
I’ve tried it. Here are some examples. I didn’t save the original prompts and answers.
Use the elliptic functions provided by Matlab to calculate the length of an elliptic arc. (I already knew that there is a closed-form solution to this in terms of elliptic functions. Everyone writing an introduction to elliptic functions mentions this, but I have never seen anyone give an actual formula. The task is complicated by the existence of multiple conventions for defining these functions.)
It gave some Matlab code using elliptic functions, but it was simply wrong. With some effort of my own, I eventually worked out a correct formula, and verified that it agreed with numerical integration.
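For reference, here is my own sketch of the standard closed form under one common convention (SciPy’s parameter-m form, with the ellipse parameterized as x = a·sin t, y = b·cos t); this is an illustration of what a correct answer can look like, not the commenter’s Matlab formula:

```python
import numpy as np
from scipy.special import ellipeinc
from scipy.integrate import quad

def ellipse_arc_length(a: float, b: float, theta: float) -> float:
    """Arc length of the ellipse x = a*sin(t), y = b*cos(t) (with a >= b)
    as t runs from 0 to theta, via the incomplete elliptic integral of the
    second kind E(phi | m), where m = 1 - (b/a)**2 (SciPy's 'parameter'
    convention, not the modulus-k convention)."""
    m = 1.0 - (b / a) ** 2
    return a * ellipeinc(theta, m)

# Sanity check against direct numerical integration of the speed:
a, b, theta = 3.0, 2.0, 1.2
speed = lambda t: np.hypot(a * np.cos(t), b * np.sin(t))
print(ellipse_arc_length(a, b, theta), quad(speed, 0, theta)[0])
```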
Devise a function that maps [0,1] onto [1,∞], is strictly increasing and differentiable, and which takes a parameter specifying how late and sharp its divergence to ∞ should be. Then program it in Matlab.
It produced a function that missed several of the requested properties. It also programmed it in Matlab, but given that the function was wrong, I didn’t bother to see if the programming was right.
I chose this problem because I’d recently needed to do that myself. It took no longer to do it right myself than to ask the LLM and determine whether it got it right — which it didn’t, so that would have been wasted time.
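For what it’s worth, here is one family of functions that seems to satisfy the stated spec (value 1 at 0, divergence to ∞ at 1, strictly increasing, differentiable, one parameter controlling how late and sharp the blow-up is); this is my own pick for illustration, not what the commenter ended up using or what the LLM produced:

```python
import numpy as np

def diverging_map(x: np.ndarray, k: float = 4.0) -> np.ndarray:
    """f(x) = 1 / (1 - x**k) maps [0, 1) onto [1, infinity):
    f(0) = 1, f(x) -> infinity as x -> 1, strictly increasing and
    differentiable on [0, 1). Larger k keeps f near 1 for longer,
    making the divergence later and sharper."""
    return 1.0 / (1.0 - x ** k)

for k in (1.0, 4.0, 16.0):
    print(k, diverging_map(np.array([0.0, 0.5, 0.9, 0.99]), k))
```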
I asked it how to modify an iOS app to respond to the user’s dark/light mode setting.
The answer it gave me I could have looked up on Google just as quickly, and Google’s answer had the advantage of going directly to Apple’s documentation and a WWDC presentation, sources of ground truth rather than the ungrounded vagaries of a chatbot which were not even worth reading.
Score: 0/3. This is typical of the results I see from LLMs on every task they are applied to, whether mine or other people’s. When I have a question, I want an answer that rings like a bell, not an LLM’s leaden clunk.
If it did actually turn out that aliens had visited Earth, I’d be pretty willing to completely scrap the entire Yudkowskian implied-model-of-intelligent-species-development and heavily reevaluate my concerns around AI safety.
A lot of cosmological-type facts have this effect. That’s why people are so occupied by them.
If that turned out to be the case, my preliminary conclusion would be that the hard physical limits of technology are much lower than I’d previously believed.
You don’t hear much about the economic calculation problem anymore, because “we lack a big computer for performing economic calculations” was always an extremely absurd reason to dislike communism. The real problem with central planning is that most of the time the central planner is a dictator who has no incentive to run anything well in the first place, and gets selected by ruthlessness from a pool of existing apparatchiks, and gets paranoid about stability and goes on political purges.
What are some other, modern, “autistic” explanations for social dysfunction? Cases where there’s an abstract economic or sociological argument about why certain policy/command structures are bad, which are mostly rationalizations designed to fit obviously correct conclusions into an existing field that wouldn’t accept them in their normal format?
I agree with your characterization of the problem with central planning, and that we don’t hear much about the economic calculation problem anymore, but… “we lack a big computer for performing economic calculations” was not an absurd reason to dislike communism, it was literally true.
All digital computers ever possessed by or within the Soviet Union had, in total, less FLOP/s than a single A100 GPU; it’s harder to get numbers for memory but the ratio is pretty stable over time. Their techniques were also enormously less efficient than modern optimization software (MILP, SMT, etc. etc.); in benchmarks this is a bigger deal than hardware progress. Amazon routinely solves planning problems which were fundamentally intractable for any 20th century government, and has enormously more data with which to solve them.
That said, I think the real real problem with central planning is that it’s… central. The price mechanism plus decentralized decisionmaking turns out to be a fantastic combination for eliciting (and arguably developing) preferences, and once you get past problems like “almost everyone is starving because our economy was based on subsistence agriculture and then wrecked by invasion” that can be solved by “grow grain, make steel, pour concrete”, you’d still be screwed even if your socialist central planners were implausibly competent and benevolent. You can get around that somewhat by allowing markets (post-Deng China), or elicit preferences with ‘shadow prices’ (a regular cause of purges among Soviet economists), but in practice you keep running into the problems caused by the ways that dictators take and keep power.
We have large centralized companies. For better or worse those companies don’t use big computers to make economic calculations that output the company decisions at the top level.
Our political system also doesn’t use big computer models to decide on economic policy. Before we had the computational capacity we might have thought that we will do that once we have it, but it turns out we don’t.
Computer hacking is not a particularly significant medium of communication between prominent AI research labs, nonprofits, or academic researchers. Much more often than leaked trade secrets, ML people will just use insights found in this online repository called arxiv, where many of them openly and intentionally publish their findings. Nor (as far as I am aware) are stolen trade secrets a significant source of foundational insights for researchers making capabilities gains, local to their institution or otherwise.
I don’t see this changing on its own, regardless of how “close” we are to developing AGI. So for now, increasing information security standards across the field seems to me like a waste of time, particularly when talking about alignment labs that (hopefully) pioneer a fraction of a fraction of relevant capabilities research. It’s hard for me to imagine a timeline in which MIRI is safeguarding a big red button from China that’s not also an Ultra Fucked timeline, without the above facts also changing.
An evil part of me would really love for cybersecurity to be very relevant to AI alignment, because it’s super interesting and also my field, but (fortunately?) I really don’t understand the people who claim that it is. I could be missing something very critical though.
Do we have a good idea of how the resources at prominent AI research labs compare to the resources that go into Five Eyes AI models for intelligence analysis, and into Chinese government pursuits?
I’ve forgotten at this point who they are, but I will ask some of my friends later to give me some of the public URLs of the “big players” working in this space so you can partly see for yourself. Their marketing is really impressive because government contractors, but I encourage you to actually look at the product on a technical level.
Largely: the NSA and its military-industrial partners don’t come up with new innovations, except as applies to handling the massive amounts of data they have and their interesting information security requirements. They just apply technologies and insights from companies like OpenAI or DeepMind. They’re certainly using things like large language models to scan your emails now, but that’s because OpenAI did the hard work already.
More importantly, when they do come up with innovations, they don’t publish them on the internet, so they don’t burn much of the “commons”, as it were.
I can’t give much insight on china, sadly.
There was a long period of time when the NSA did come up with cryptography-related math innovations in secret and did not share that information publicly.
The NSA does see itself as the leading employer of mathematicians in the United States. To the extent that those employees come up with groundbreaking insights, those are likely classified and you won’t find them in the marketing materials of government contractors.
Every once in a while I’m getting bad gateway errors on Lesswrong. Thought I should mention it for the devs.
Today, I also got errors.
It is unnecessary to postulate that CEOs and governments will be “overthrown” by rogue AI. Board members in the future will insist that their company appoint an AI to run the company because they think they’ll get better returns that way. Congressmen will use them to manage their campaigns and draft their laws. Heads of state will use them to manage their militaries and police agencies. If someone objects that their AI is really unreliable or doesn’t look like it shares their values, someone else on the board will say “But $NFGM is doing the same thing; we obviously need to stay competitive with them” and that will be the end of the debate. Deep technical safety concerns about mesaoptimizers will not even be brought up during the meeting. AIs will just slowly capture all of our institutions and begin to write and enforce our laws because we design and build them for that purpose. We are actually that stupid.
You should not say that this is the only concern; in fact you should explicitly state that it’s not the only one. But you should mention this first, because it’s way more understandable to lots of people than the idea that superintelligent machines will have hard power and manage to overturn the federal government directly, for some reason.
Currently reading The Rise and Fall of the Third Reich for the first time. I’ve wanted to read a book about Nazi Germany for a while now, and tried more “modern” and “updated” books, but IMO they are still pretty inferior to this one. The recent books from historians I looked at were concerned more with an ideological opposition to Great Men theories than factual accuracy, and also simply failed to hold my attention. Newer books are also necessarily written by someone who wasn’t there, and by someone who does not feel comfortable commenting about events from a first person perspective as such.
This book, though, is riveting, and I have to avoid the impulse of looking things up on Wikipedia as I read their names to keep the narrative. There are so many details about the buildup to WW2 here that I was just unaware of going in. I think everybody knows of Chamberlain as the Hitler appeasement guy, and Wikipedia tells you he maybe gets a bad rap in retrospect, but the full extent of his treachery and spinelessness is sickening. I am creating a “highlights from” post based on the book and encourage you guys to read it yourselves if you want to learn more about authoritarianism.
Looking forward to your post!
I have been working on a detailed post for about a month and a half now about how computer security is going to get catastrophically worse as we get the next 3-10 years of AI advancements, and unfortunately reality is moving faster than I can finish it:
https://krebsonsecurity.com/2022/10/glut-of-fake-linkedin-profiles-pits-hr-against-the-bots/
I understand, though I’d still like to see that post, especially as it relates to some of the more advanced attacks. Unfortunately yeah it’s already happening, though not much has come of it so far.
I have always understood that the CIA, and the U.S. intelligence community more broadly, is incompetent (not just misaligned—incompetent, don’t believe the people on here who tell you otherwise), but this piece from Reuters has shocked me:
Good rationalists have an absurd advantage over the field in altruism, and only a marginal advantage in highly optimized status arenas like tech startups. The human brain is already designed to be effective when it comes to status competitions, and systematically ineffective when it comes to helping other people.
So it’s much more of a tragedy for the competent rationalist to choose to spend most of their time competing in those things than to take a shot at a wacky idea you have for helping others. You might reasonably expect to be better at it than 99% of the people who (respectably!) attempt to do so. Consider not burning that advantage!
I don’t think I agree with the premise, but it’s a really weird comparison. “advantage over the field” is kind of meaningless for altruism, where the goal really should be cooperation with the field in improvements for (subsets of) people. Tech startups ALSO benefit from this attitude, in that you’re trying to align your company to provide more utility to customers, though it also includes more explicit competition among companies and individuals.
Tech startups (and lucrative employment in non-startups) ARE a much bigger arena, so the competitive parts have much stronger competition. I guess to that extent, I agree—altruism is easier, if you care about relative rank rather than absolute results. I don’t know the altruism world enough to know how much status competition there is, but the local food and employment charities I’ve been involved with don’t seem immune at all.
In hindsight, it is literally “based theorem”. It’s a theorem about exactly how much to be based.
What theorem?
Bayes’ Theorem, presumably.
lol
Especially to non-native speakers, it’s not at all obvious that Bayes’ Theorem and Based Theorem sound almost the same since d, which reads like t, merges with th.
I think we should reward admitting-of-ignorance, or at the very least not punish it.
In the same way that Chinese people forgot how to write characters by hand, I think most programmers will forget how to write code without LLM editors or plugins pretty soon.
Once the usage of AI editors becomes mainstream, the programming languages themselves may start evolving in a direction of no longer being legible for an unaided human, because why not. Complaining about not being able to understand the source code will sound similar to complaining about not being able to read the binary code today. Like “yeah, but you are not supposed to do that, that’s what the algorithm is for”.
They may, but I think the AI code generators would have to be quite good. As long as the LLMs are merely complementing programming languages, I expect them to remain human-readable & writable; only once they are replacing existing programming languages do I expect serious inscrutability. Programming language development can be surprisingly antiquated and old-fashioned: there are many ways to design a language or encode it where it could be infeasible to ‘write’ it without a specialized program, and yet, in practice, pretty much every language you’ll use which is not a domain-specific (usually proprietary) tool will let you write source code in a plain text editor like Notepad or nano.
The use of syntax highlighting goes back to at least the ALGOL report, and yet, something like 50 years later, there are not many languages which can’t be read without syntax highlighting. In fact, there are very few which can’t be programmed just fine with solely ASCII characters in an 80-col teletype terminal, still. (APL famously failed to ever break out of a niche, and all spiritual successors have generally found it wiser to at least provide a ‘plain text’ encoding; Fortress likewise never became more than an R&D project.) Like this website—HTML, CSS, JS, maybe some languages which compile to JS, SVG… all writable in a 1970s Unix minicomputer printing out to physical paper.
Or consider IDEs which operate at ‘project’ level or have ‘tags’ or otherwise parse the code in order to allow lookups of names, like methods on an object—you could imagine programming languages where these are not able to be written out normally because they are actually opaque UUIDs/blobs/capabilities, and you use a structural editor (similar to spreadsheets) to modify everything, instead of typing out names letter by letter like a barbarian. (And ‘visual’ programming languages often do such a thing.) The Smalltalk systems where you did everything by iteratively interacting with GUI objects come to mind as systems where it’s not even clear what the ‘plain text’ version is, after you’ve used the systems dynamically as they were intended to be used, and rewritten enough objects or overridden enough methods… But again, few languages in widespread use will do that.
what do you think of replit agent, stack blitz, etc?
The extensive effort they make to integrate into legacy systems & languages shows how important that is.
There’s a particular AI enabled cybersecurity attack vector that I expect is going to cause a lot of problems in the next year or two. Like, every large organization is gonna get hacked in the same way. But I don’t know the solution to the problem, and I fear giving particulars on how it would work at a particular FAANG would just make the issue worse.
I don’t understand why you wouldn’t just follow normal responsible disclosure practices here, e.g. just disclose this to Google and then leave it to them.
Google’s red team already knows. They have known about the problem for at least six months and abused the issue successfully in engagements to get very significant access. They’re just not really sure what to do because the only solutions they can come up with involve massively disruptive changes.
I know some pretty senior people in security for 2 FAANG companies, and passing acquaintance at others, and currently work in the Security org at a comparable company. All of them have reporting channels for specific threats, and none (that I know) are ignorant of the range of AI-enabled attacks that are likely in the near future (shockingly many already). The conversations I’ve had (regarding products or components I do know pretty well) have convinced me that everything I come up with is already on their radar (though some are of the form “Yeah, that’s gonna happen and it’s gonna suck. Current strategy is to watch for it and not talk much about it, in order not to encourage it”).
Without disclosing some details, there’s probably no way to determine whether your knowledge or theory is something they can update on. I’m happy to pass on any information, but I can’t see why you’d trust me more than more direct employees of the future victims.
The security team definitely know about the attack vector and I’ve spoken to them. It’s just that neither I nor they really know what the industry as a whole is going to do about it.
Sounds like the sort of thing I’d forward to Palisade research.
Gambling is just stealing money from your clones in foreign everett branches
For a significant fee, of course
Serial murder seems like an extremely laborious task. For every actual serial killer out there, there have to be at least a hundred people who would really like to be serial killers, but lack the gumption or agency and just resign themselves to playing video games.
Works with most crimes tbh.
I sometimes read someone on here who disagrees fiercely with Eliezer, or has some kind of beef with standard LessWrong/doomer ideology, and instinctively imagine that they’re different from the median LW user in other ways, like not being caricaturishly nerdy. But it turns out we’re all caricaturishly nerdy.
There is a kind of decadence that has seeped into first world countries ever since they stopped seriously fearing conventional war. I would not bring war back in order to end the decadence, but I do lament that governments lack an obvious existential problem of a similar caliber, one that might coerce their leaders and their citizenry into taking foreign and domestic policy seriously and keep them from devolving into mindless populism and infighting.
To the extent that “The Cathedral” was ever a real thing, I think whatever social mechanisms that supported it have begun collapsing or at least retreating to a fallback line in very recent years. Just a feeling.
Conspiracy theory: sometime in the last twenty years the CIA developed actually effective polygraphs and the government has been using them to weed out spies at intelligence agencies. This is why there haven’t been any big American espionage cases in the past ten years or so.
Either post your NASDAQ 100 futures contracts or stop fronting near-term slow takeoff probabilities above ~10%.
If I was still a computer security engineer and had never found LessWrong, I’d probably be low key hyped about all of the new classes of prompt injection and social engineering bugs that ChatGPT plugins are going to spawn.
Injections don’t deal with the model itself; it would be just like any other input-prompt security protocol. Heck, I surely hope ChatGPT doesn’t execute code with root permission.
If someone is using a GPTv4 plugin to read and respond to their email, then a prompt injection would mean being able to read other emails
I didn’t know you could do that. Truly dangerous times we live in. I’m serious. More dangerous because of the hype. Hype means more unqualified participation.
>be big unimpeachable tech ceo
>need to make some layoffs, but don’t want to have to kill morale, or for your employees to think you’re disloyal
>publish a manifesto on the internet exclaiming your corporation’s allegiance to right-libertarianism or something
>half of your payroll resigns voluntarily without any purging
>give half their pay to the other half of your workforce and make an extra 200MM that year
Seems like a lot of ways for that to go wrong. Especially if many of your customers (big advertisers) leave voluntarily as well.
Seems to have worked out for Kraken and Coinbase though.
For now. Even if not, there are others for whom it’s not worked out so well, and it’s unclear if these actions are causal to success.
Forcing your predictions, even if they rely on intuition, to land on nice round numbers so others don’t infer things about the significant digits is sacrificing accuracy for the appearance of intellectual modesty. If you’re around people who shouldn’t care about the latter, you should feel free to throw out numbers like 86.2% and just clarify that your confidence is way outside 0.1%, if that’s just the best available number for you to pick.
Every five years since I was 11 I’ve watched The Dark Knight thinking “maybe this time I’ll find out it wasn’t actually as good as I remember it being”. So far it’s only gotten better each time.
Hmm. Can’t upvote+disagree for shortform entries. I like hearing about others’ preferences and experiences in cultural and artistic realms, so thanks for that. I’m not sure I exactly disagree—the movie was very good, but not in my top-10 - I need to re-watch it, but previous re-watches have been within epsilon of my expectations—still good, but no better nor worse than before.
Can you identify the element(s) that you expect to age badly, or you think you overvalued before, and which surprised you by still being great? Or just the consistency of vision and feel through all the details?
Also, if you are even a little bit of a Batman or superhero connoisseur, I highly recommend Birdman (2014).
One of the very surprising ones is this sense of something cousined to “realism”. Specifically, how much the city of Gotham could be seamlessly replaced with “Juarez” or “Sinaloa” and become an uncomfortably on-point tragedy about the never-ending war between honest men and organized bandits in those regions. The level of corruption and government ineffectiveness, the open coordination and power-sharing between the criminals carving up the city, and the ubiquitous terrorism are unrealistic for modern America, and yet as a premise they are pretty much unassailable, because cities as bad as TDK::Gotham or worse exist around the world today.
Another is, I’m not ashamed to say it, the depth of the social commentary. You are setting yourself up to be the cringiest of cringe by saying that the Joker says something deep in a movie, at this point, but I honestly find the following quote between Harvey and him in the middle of the movie a little gut wrenching:
Also it’s just a really well done movie! It says a particular thing it wants to say, very well, and doesn’t really trip and fall over itself at any point in its runtime.
Made an opinionated “update” for the anti-kibitzer mode script; it works for current LessWrong with its agree/disagree votes and all that jazz, fixes some longstanding bugs that break the formatting of the site and allow you to see votes in certain places, and doesn’t indent usernames anymore. Install Tampermonkey and browse to this link if you’d like to use it.
Semi-related, I am instituting a Reign Of Terror policy for my poasts/shortform, which I will update my moderation policy with. The general goal of these policies is to reduce the amount of time I spend thinking and distressing about the same shit every social media platform makes you stress out about to the detriment of your mental health: which person is commenting on my posts, what upboats are they/I getting, Muh Status, etc. I respect that these are stringent enough requirements, that I have epsilon negotiating leverage, and that I will probably end up either banning or scaring off nine of the ten people that would have ever commented on or read the things I plan to write on LW. This is the only way I think I’ll be able to tolerate writing anything for smart people moving forward, so I’m going to do it even if nobody ever comments on my posts again.
The Reign of Terror policy says:
You must respect the wishes of those who are using the anti-kibitzer script not to know who you are. This means not saying stuff that people on here could reasonably use to infer your identity, even if “who you are” is something you don’t expect to make people take what you say better. The exception is if you’re drawing on absolutely fucking critical anecdotal evidence, like when disputing serious factual mistakes about you or someone you personally know. I will be extremely unforgiving about exceptions, which will be rare because of the rule below.
No discussions of specific individuals, if they could ever be reasonably expected to read anything you write, or if anybody else who personally knows them could ever be reasonably expected to read anything you write, perhaps by Googling their name and coming up on the post or by searching for themselves inside LW.
Examples of people whose names you can utter: Xi Jinping, Jeffrey Epstein. Examples of people whose names you can’t utter: Eliezer Yudkowsky, Sam Altman, the name of my religious college friend who has a sysadmin job now.
It’s of course sometimes necessary in rationality discussions to imply things about particular people, for example to respond to their ideas or the ideas of a specific group like “Christians”, but I will only allow this if you literally have no other way of effectively making some broader point. If it seems to me like you missed a more general way of making said point but you did your best I won’t permaban you and instead just ask you to modify your comment.
No implying positive status differences between you and either the median American computer programmer or another commenter, under any circumstances, regardless of how much it would contribute to the conversation for you to do so. If you have something you feel like you have to say entirely independent of the fact that it makes you look cool (and thus makes other people feel small), you must either refrain from saying it, or find a way to say it in a way that doesn’t do that. No asking people for information that would confirm/disconfirm such things either.
There is no separate policy or exception for people who got their status by founding a save-the-drowning-children corporation.
I will be sooooo retarded about this rule. If I don’t ban someone in the next six months for breaking this rule, I will go ahead and ban the person with the highest log-odds of having broken it in a way I didn’t understand, just to precommit to the three people that have read this far that I mean business.
Rudeness is allowed, but only rudeness that makes you look dumb and the other person look good. Think 4chan rudeness, when 4chan isn’t being totally spiteful or mean. If you take that kind of 4chan rudeness seriously and start commenting on my posts anyway, you have forfeited any and all sympathy from me in particular and will just be pointed to Da Rules.
Please install the anti-kibitzer script. Not a rule, I couldn’t enforce it anyways, but strongly suggested.
This only applies to posts and shortforms I make from-now-on. I certainly haven’t followed it myself. You of course get one opportunity to follow the reign of terror policy and then I will click the ban forever button, because even if you’ve been a diligent sport throughout the last 800 posts, I won’t know who you are when you decide to mention in passing that you used to work at Google or MIRI or whatever.
I have no clue whether any of my previous comments on your posts will qualify me for perma-ban, but if so, please do so now, to save the trouble of future annoyance since I have no intention of changing anything. I am generally respectful, but I don’t expect to fully understand these rules, let alone follow them.
I have no authority over this, but I’d hope the mods choose not to frontpage anything that has a particularly odd and restrictive comment policy, or a surprisingly-large ban list.
I think it’s better to annoy commenters than to annoy post authors, so actually allowing serious Reign of Terror is better than meaningfully discouraging it. That’s the whole point of Reign of Terror, and as the name suggests it shouldn’t be guaranteed to be comfortable for its subjects.
One problem with how it’s currently used is authors adopting a Reign of Terror policy for their own comfort in a motte/bailey way, without any actual harsh moderation activity, which inflates the category into the territory of expected comfort for the commenters. There should be a weak incentive for authors not to do this if they don’t actually care.
For a lot of posts, the value is pretty evenly distributed among the post and the comments. For frontpage-worthy ones, it’s probably weighted more to posts, granted. I fully agree that “reign of terror” is not sufficient reason to keep something off frontpage.
I was reacting more to the very detailed rules that don’t (to me) match my intuitions of good commenting on LW, and the declaration of perma-bans with fairly small provocation. A lot will depend on implementation—how many comments lc allows, and how many commenters get banned.
Mostly, I really hope LW doesn’t become a publishing medium rather than a discussion space.
There’s practically no reason on a rationality forum for you to assert your identity or personal status over another commenter. I agree the rules I’ve given are very detailed. I don’t agree that more than a small minority of the valuable comments on LessWrong would somehow be bannable by my standard.
The reason I’m stringent about doing this, is because the status asserting comments literally ruin it for everybody else, even when the majority of everybody else is not interested in such competitions. They make people like me, who are jealous and insecure, review everything they’ve ever written in the light that they might be judged. I don’t come here because I want to engage in yet another status tournament. I come here because I want to become a better thinker and learn new and interesting things about the world. I also come here because I like being able to presume that most of the other commenters are using the forum like I am. In this sense it’s worth it to me if this policy prevents one person from trying to social climb even if I have to prevent four other comments that wouldn’t otherwise be a problem.
As I said, obviously this is not a retroactively applying policy, I have not followed it until now, and I will not ban anybody for commenting differently on my posts. I’m not going to ban you pre-emptively or judge you harshly for not following all of my ridiculously complicated rules. Feel free to continue commenting on my posts as you please and just let me eventually ban you; that’s honestly fine by me and you should not feel bad about it.
I personally hope they would not refuse to frontpage my posts from now on for having a restrictive comment policy when it’s not obviously censoring criticism of the post itself, but I have already forfeited arbitrarily large amounts of exposure and the mods can do what they wish.
Would it be a good idea to get [OP] stickers on comments by the author of the post?
Based on Victoria Nuland’s recent Senate testimony, I’m registering a 66% prediction that those U.S.-administered biological weapons facilities in Ukraine do indeed exist and are not Russian propaganda.
Of course I don’t think this is why they invaded, but the media is painting this as a crazy conspiracy theory, when they have very little reason to know either way.
Here’s an analysis by Dr. Robert Malone about the Ukraine biolabs, which I found enlightening:
https://rwmalonemd.substack.com/p/ukraine-biolab-watchtower?r=ta0o1&s=w&utm_campaign=post&utm_medium=web
I glean that “biolab” is actually an extremely vague term, and doesn’t specify the facility’s exact capabilities at all. They could very well have had an innocuous purpose, but Russia would’ve had to treat them as a potential threat to national security, in the same way that Russian or Chinese “biolabs” in Mexico might sound bad to the US, except Russia is even more paranoid.
It seems like “biological weapons facility” is quite a subjective term. The US position is that their own army labs, which produced the anthrax that was used after 9/11, are not a “biological weapons facility”, because while they do produce anthrax that could be used militarily, it’s not produced with the intent of military use.
Based on those definitions it’s plausible that the Ukrainian labs produce viruses that can be weaponized but that the US just doesn’t see them as a “biological weapons facility” because they believe the intent for offensive use isn’t there.
Glenn Greenwald’s reporting on this is good. https://rumble.com/vx2iq7-the-white-houses-game-playing-denials-of-bio-labs-in-ukraine.html is the freely accessible video version; there’s also a written version on his Substack behind a paywall.
If you make exact predictions like that, you should define what you mean by your terms.
It’s like Fauci’s dance saying that there’s no gain-of-function research in the paper he mailed around with gain-of-function in the filename. The US government doesn’t use commonsense definitions for words when it comes to biosafety.
I use the common-sense definition: if, for example, there’s military risk in letting your enemies get ahold of the viruses because they’re dangerous and deliberately designed to maximize damage, that’s a bioweapon.
I’m registering a 90% prediction that those facilities do not exist, as in “how the hell would the US have been dumb enough to plant biological weapons facilities in a remote country outside their sphere of influence, where Russia has (or used to have, until recently) a lot of weight...”
Do you remember what weapons the US gave Iraq? How is arming Ukraine with such weapons less insane than it was with Iraq?
Wouldn’t be the dumbest thing they’ve ever done.
10xing my income did absolutely nothing for my dating life. It had so little impact that I am now suspicious of all of the people who suggest this more than marginally improves sexual success for men.
What impact did it have on your life in general?
For example, I can imagine someone getting a 10x income in a completely invisible way, such as making some smartphone games anonymously, selling them on the app store, putting all the extra money in a bank account, while living exactly the same way as before: keeping their day job, keeping the same spending habits, etc. That kind of income increase would obviously have no impact, as it is almost epiphenomenal.
Also, if you 10x your income by finding a job that requires you to work 16 hours a day, 7 days a week, the impact on dating will be negative, as you will now have no time to meet people. Similarly, if the better paying work makes you so tired that you just don’t have any energy left for social activities in your free time, etc.
But if we imagine a situation like “you have the same kind of 9-5 job that takes the same amount of your energy, except somehow your salary is now 10x what it used to be (and maybe you have a more impressive job title)”...
I guess you could buy some signals of wealth, such as more expensive clothes, car, watches. (This won’t happen automatically; you have to actually do it.) You could get some extra free time by paying people to do something that you previously spent your own time doing, such as cooking and cleaning. More free time means more opportunities to meet people. (Again, this won’t happen automatically.) Finally, having more money allows you to visit places that were previously too expensive for you. Some women might prefer such places, because being there automatically filters the kind of men they meet. (This also won’t happen automatically.) I guess the obvious question is whether you did any of this.
Finally, the way having more money can dramatically improve your dating life is if you wisely invest the money in index funds, get enough passive income, and quit your job. Suddenly you have 16 hours a day to socialize, and basically you can optimize your life to meet more women. You could even do some non-obvious thing, such as choose a low-paying job that comes with a 90% female workplace. If that doesn’t help, I would be surprised.
You have to adopt the lifestyle associated with a high income: eating significantly healthier, working out frequently, and dressing nicely. Secondly, a high income in an engineering field (given the priors this seems likely) does not mean you have the ability to converse effectively in social settings. Communicating well with women is a skill. Historically, a high income was associated with an ability to manipulate social groups for your own gain; now it’s more closely associated with understanding the world at a deep level. Income is a means to an end, but it is not the end itself.
To date effectively and meet women, one needs to establish trust. This means meeting somebody through your social network (friends), or in an institution of high trust, such as a secret meeting of only elite people, etc. Also, women enjoy being dominated. If you go down the social strata you will find it much easier to date than to date a peer or superior. Aella once wrote “a woman just wants to be railed by a man she respects”, so be worthy of respect in all the ways that are not income.
I wonder if the original purpose of Catholic confession was to extract blackmail material/monitor converts, similar to what modern cults sometimes do.
I wonder how a historian could answer this question. Even if it was true, someone would have to be stupid enough to write it down explicitly. On the other hand, most people were illiterate, so maybe writing itself was effectively a secret code for clergy. But even then… the priests doing this would not necessarily have to realize this; they could do it primarily to absolve the sins, and only use the blackmail as an afterthought. Also, the mere possibility of blackmail is already a power.
As an argument against this, there is the concept of the “confessional secret”, which is taken very seriously by Catholics. Revealing the secret would cost the priest his job at the very least; often it would also be punished by prison, historically sometimes by death. There are officially no exceptions: no matter the crime, not even if the Pope commanded you to reveal the secret. It is even considered a sin if the priest thinks too much about the contents of the confession afterwards. -- That said, I do not know whether these rules were there from the very beginning, or whether they only appeared a few centuries later.
I realize that this is not the purpose of confession today, or even during the Middle Ages. Since 1000 AD it’s been very earnest. I just suspect it has sinister origins.
“Men lift for themselves/to dominate other men” is the absurd final boss of ritualistic insights-chasing internet discourse. Don’t twist your mind into an Escher painting trying to read hansonian inner meanings into everything.
In other news, women wear makeup because it makes them more attractive.
Women also wear expensive designer handbags that men don’t care about at all but other women do.
If a woman has the choice to wear an outfit that makes her more attractive to men but makes her lose status with other women who believe that it looks slutty, she usually doesn’t maximize attractiveness to men.
Women don’t only care about attractiveness to men, but “women wear makeup because {some_weird_internal_psychological_thing}” is unhelpful. You are better served by the “women wear makeup for other people” heuristic, because it lets you arrive at conclusions like “women tend to apply makeup much less when they stay indoors eating cheetos”.
“Men lift to be able to dominate other men” wouldn’t be about {some_weird_internal_psychological_thing}; it would be about social interaction.
If attractiveness were the key thing that matters you would expect a woman to wear less makeup when she goes to an event where there are only women than when she goes to an event with mixed genders. While I don’t have hard statistics I don’t think that’s the case.
Arbitrary motivations endorsed on reflection can be found in all sorts of activities. An unusual motivation can be as genuine as any other, it’s not always a usual motivation clad in self-deception. People get to decide their values.
Have you read this article on the topic? https://thelastpsychiatrist.com/2013/01/no_self-respecting_woman_would.html
I found it to be very interesting and entertaining, the sort of reading which is enjoyable even to those who disagree with it. I can’t write anything on the topic myself which isn’t objectively worse than the link I’ve provided.
Getting “building something no one wants” vibes from the AI girlfriend startups. I don’t think men are going to drop out of the dating market until we have some kind of robotics/social revolution, possibly post-AGI. Lonely dudes are just not that interested in talking to chatbots that (so they believe) lack any kind of internal emotion or psychological life, cannot be shown to their friends/parents, and cannot have sex or bear children.
I agree: the capabilities of AI romantic partners probably aren’t the bottleneck to their wider adoption, considering the success of relatively primitive chatbots like Replika at attracting users. People sometimes become romantically attached to non-AI anime/video game characters despite not being able to interact with them at all! There doesn’t appear to be much correlation between the interactive capabilities of fictional-character romantic partners and their appeal to users/followers.
There’s a parallel here with VR. Some part of people’s intuition says that VR porn/video games have to be a Next Evolution over simple screen + keyboard interfaces, worth pouring billions of dollars into, because VR is “more immersive” or something. But actually a laptop and a USB mouse work just fine.
I disagree. There’s a lot of low-hanging fruit in the AI waifu space[1]. Lack of internal emotion or psychological life? Just simulate internal monologue. Lack of long-term memory? Have the AI waifu keep a journal. Lack of visuals? Use a LoRA fine-tuned diffusion model alongside the text chat.
I’d be building my own AI waifu startup if we didn’t face x-risks. It seems fun (like building your own video game), and probably a great benefit to its users.
Also, lonely men will not be the only (or even primary) user demographic. Women seem to read a lot of erotica. I expect that this is an untapped market of users, and pandering to it will not make your startup look low status either.
[1]: Not using the word “girlfriend” here because I’d like to use a more gender-neutral term, and “waifu” seems pretty gender-neutral to me, and to one target demographic of such services.
I think you might wind up depressed by the experience. Certainly this guy did: https://mazzzystar.github.io/2023/11/16/ai-girlfriend-product/
I wonder if we will ever have a sexbot revolution. The urge to regulate other people’s sexuality seems too strong. I can imagine a future where people spend most of their time in virtual reality that allows them to do almost anything… except, if they want some sexual experience, a stern robotic voice reminds them that this would violate the Terms of Service.
There’s a portion of project lawful where Keltham contemplates a strategy of releasing Rovagug as a way to “distract” the Gods while Keltham does something sinister.
Wouldn’t Lawful beings with good decision theory precommit to not being distracted and just immediately squish Keltham, thereby being immune to those sorts of strategies?
Yep.
At least according to CNN’s exit polls, a white person in their twenties was only 6% less likely to vote for Trump in 2020 than a white person above the age of sixty!
This was actually very surprising to me; I think a lot of people have a background sense that younger white voters are much less socially and politically conservative. That might still be true, but the ones that choose to vote, vote Republican at basically the same rate in national elections.
Why aren’t there institutions that certify people as being “Good people”?
Seems like there’d be a lot of adversarial pressure on how that signal gets used. Have you heard of the Nobel Peace Prize?
I imagined something more distributed, because people disagree on what a “good person” means, so maybe the solution could be to let everyone use their own personal definition, and make a system that supports that. For example, you could specify whether someone is a good person, and separately whether you trust someone’s judgment about whether other people are good. And then you could ask about someone, and the system would tell you what is the opinion of the people whose judgment you trust.
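(To make that concrete, here is a minimal sketch of the kind of one-hop query such a system could run; every name and data structure below is hypothetical, purely to illustrate the idea:)

```python
def goodness_opinion(target: str, me: str,
                     good_votes: dict[str, set[str]],
                     trusted_judges: dict[str, set[str]]) -> float:
    """Fraction of the judges I trust who have marked `target` as a good person.

    good_votes[j] is the set of people judge j considers good;
    trusted_judges[me] is the set of people whose judgment I trust.
    """
    judges = trusted_judges.get(me, set())
    if not judges:
        return 0.0
    endorsements = sum(1 for j in judges if target in good_votes.get(j, set()))
    return endorsements / len(judges)

# Hypothetical data: I trust Alice's and Bob's judgment; both consider Carol good.
good_votes = {"alice": {"carol"}, "bob": {"carol", "dave"}}
trusted = {"me": {"alice", "bob"}}
print(goodness_opinion("carol", "me", good_votes, trusted))  # 1.0
```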
But the problems are obvious. People with power would punish you in real life for giving them negative ratings. If most people are afraid to give a bad rating to their current boss or their priest, then this simply becomes a database of people with political power. And the more specific the feedback you provide on others, the more useful the system becomes (information like “this person may steal your money” or “this person may try to rape you” is way more useful than an unspecific “I think this person is bad”), but the more likely they are to sue you as well.
Conversely, people would provide false information about the ones they hate. Where you now see a twitter mob trying to get someone fired, in this system they would probably all enter some false information about having a specific negative interaction with given person. You could try to detect this behavior, but then people would learn to overcome detection, leading to an arms race (e.g. the system could detect that if million people across the planet say on the same day that you punched them, it’s probably a lie; but then the twitter mob leader would say “only people living in area X report physical violence, everyone else report online harassment; also everyone don’t make the report on the same day, I will send to each of you a personal reminder on a randomly chosen day”).
And basically all of this applies to gossip, too.
I can think of quite a few institutions that certify people as being “good” in some specific way, e.g.
Credit Reporting Agencies: This person will probably repay money that you lend to them
Background Check Companies: This person doesn’t have a criminal history
Professional Licensing Boards: This person is qualified and authorized to practice in their field
Academic Institutions: This person has completed a certain level of education or training
Driving Record Agencies: This person is a responsible driver with few or no traffic violations
Employee Reference Services: This individual has a positive work history and is reliable
Is your question “why isn’t there an institution which pulls all of this information about a single person, and condenses it down to a single General Factor of Goodness Score”?
I think defining “good person” is very hard, that it’s very hard to prevent people from gaming this metric, and that it’s very hard to judge people correctly (imagine a group of 4th graders trying to judge which of their teachers is more intelligent, for instance). My point is that judging something above yourself is difficult, as you judge relative to your own standards, which aren’t as universal as you assume.
For now, what society considers a “good person” is mostly somebody who they have no dirt on, which ends up being somebody who is harmless and uninteresting. Because we focus on avoiding negatives rather than on cultivating positives, most people who try really hard to become “good people” just become pathetic instead (for instance the Nice Guy stereotype).
I’m reminded of the quote “If a tree is to grow into heaven, its roots must grow into hell”, and I think that’s a less naive take on goodness in man than what society is currently promoting.
There are. The prestigious universities are examples. (lc and I are Americans.)
I think the prestigious universities mostly select for diligence and intelligence and any selection for prosocial behavior is sort of downstream of those things’ correlates.
I think that mutual reputation affirming services with a specific context could be a good thing for society. Like, we have weak forms of this with LinkedIn, and very narrow forms with the institutions that Faul Sname mentions. But I think we could have better, slightly more general forms, that were deliberately designed to be at least reasonably resistant to adversarial pressure (as Ben Pace highlights).
For example, I could see how it would be very useful for increasing the legibility of potential job candidates if their former supervisors and colleagues had some way to leave verified but anonymous reviews for them through the facilitation of some organization. This organization would accept payment and, with the permission of the target individual, would supply a report about the opinions of that individual collected from their former colleagues to a potential new job’s recruiters. The individual could select which of their former employments should be included.
There would certainly be adversarial pressure to rig this system in favor of candidates, but also the work-reputation-management company would have reason to want to maintain the accuracy and fairness of their reports. Their reputation for being accurate is what would make them valuable, after all!
I think a similar sort of thing could be done for dating, at least insofar as being a convenient way to be able to give a prospective romantic interest some third party verification that previous people you’ve dated assert that you weren’t threatening or abusive. I imagine this would work as a sort of improved dating service, where you met people through the service (old school OkCupid matching kinda stuff) and then went on dates, and then filled out a small questionnaire afterwards. It would give people incentive to be polite and friendly even if they decided they didn’t like the person they matched with. Everyone using the service would have a reputation to maintain.
In government, I think there’s a lot of value to being able to assign limited conditional representation approval to other citizens. Like, “Bob can vote for me on all issues categorized as environmental. Alice can vote for me on all judicial appointments. All other votes I will fill out myself until further notice.”
I’ve heard multiple different proposals for how such a governance system might work. For a relatively recent and well-developed take on this idea, see here: https://www.lesswrong.com/posts/4KjiZeAWc7Yv9oyCb/tackling-moloch-how-youcongress-offers-a-novel-coordination
We will witness a resurgent alt-right movement soon, this time facing a dulled institutional backlash compared to what kept it from growing during the mid-2010s. I could see Nick Fuentes becoming a Congressman or at least a major participant in Republican party politics within the next 10 years if AI/Gene Editing doesn’t change much.
How would AI or gene editing make a difference to this?
Either would just change everything, so any prediction ten years out you basically have to prepend “if AI or gene editing doesn’t change everything”
Do happy people ever do couple’s counseling for the same reason that mentally healthy people sometimes do talk therapy?
I’m generally considered a happy person and I did couple’s counseling at a time when my partner was also happy. That was in the context of getting early marriage advice and was going generally well. I’m not sure about talk therapy. I’m generally of the opinion that talking with people helps with resolving all kinds of issues.
What is that reason you are referring to?
Crazy how you can open a brokerage account at a large bank and they can just… Close it and refuse to give you your money back. Like what am I going to do, go to the police?
That does sound crazy. Literally—without knowing some details and something about the person making the claim, I think it’s more likely the person is leaving out important bits or fully hallucinating some of the communications, rather than just being randomly targeted.
That’s just based on my priors, and it wouldn’t take much evidence to make me give more weight to possibilities of a scammer at the bank stealing account contents and then covering their tracks, or bank processes gone amok and invoking terrorist/money-laundering policies incorrectly.
Going to police/regulators does sound appropriate in the latter two cases. I’d start with a private lawyer first, if the sums involved are much larger than the likely fees.
An attorney rather than the police, I think.
I wish I could have met my grandparents while they were still young.
Just had a conversation with a guy where he claimed that the main thing that separates him from EAs was that his failure mode is us not conquering the universe. He said that, while doomers were fundamentally OK with us staying chained to Earth and never expanding to make a nice intergalactic civilization, he, an AI developer, was concerned about the astronomical loss (not his term) of not seeding the galaxy with our descendants. This P(utopia) for him trumped all other relevant expected value considerations.
What went wrong?
Among other things I suppose they’re not super up on that to efficiently colonise the universe [...] watch dry paint stay dry.
This YouTube video response was like the gateway rationalist drug for zoomers. I remember showing this to friends and family as a mindblown 12yo at the time and they just didn’t get it. I’d never even played Morrowind.
I think it might be healthier to call rationality “systematized and IQ-controlled winning”. I’m generally very unimpressed by the rationality skills of the 155 IQ computer programmer with eight failed startups under his belt, who quits and goes to work at Google after that, when compared to the similarly-status-motivated 110 IQ person who figures out how to get a high paying job at a car dealership. The former probably writes better LessWrong posts, but the latter seems to be using their faculties in a much more reasonable way.
It depends on person 1’s motivation. If his or her motivation is selfish, then I agree with you, but if the motivation is altruistic, that makes the utility of money linear, and startups are a potent way to maximize expected money.
That is the VC propaganda line, yeah. I don’t think it’s actually true; for the median LW-using software engineer, working for an established software company seems to net more expected value than starting a company. Certainly the person who has spent the last five years of their twenties attempting and failing to do that is likely making repeated and horrible mistakes.
The math should actually be similar for what VC or EA would prefer you to do.
I think the actual problem is that almost no one is altruistic enough to say: “For a sufficiently large value of X, I prefer a 1% chance of making X and a 99% chance of being homeless, over a 100% chance of living a happy middle-class life”.
Not if most VCs lose money and are led astray by the auctioneer’s fallacy. Also not if a tertiary goal of most VC posting is to get people to quit their jobs and try, and so increase the supply of investment opportunities available to pick from.
Yeah, but even if the advice VCs give to people in general is worthless, it remains the case that (like Viliam said) once the VC has invested, its interests are aligned with the interests of any founder whose utility function grows linearly with money. And VCs usually advise the startups they’ve invested in to try for a huge exit (typically an IPO).
Tru
The real reason it’s hard to write a utopia is because we’ve evolved to find our civ’s inadequacy exciting. Even IRL villainy on Earth serves a motivating purpose for us.
A hobbyhorse of mine is that “utopia is hard” is a non-issue. Most sitcoms, coming-of-age stories and other “non-epic” stories basically take place in Utopia (i.e. nobody is at risk of dying from hunger or whatever, the stakes are minor social games, which is basically what I expect the stakes in real-life-utopia to be most of the time).
It seems like “Utopia fiction is hard” problem only comes up for particular flavors of nerds who are into some particular kind of “epic” power fantasy framework with huge stakes. And that just isn’t actually what most stories are about.
I definitely disagree, and I don’t think this is addressing the heart of what I meant to say.
Take war (& war stories) for instance. The socially acceptable thing to say about war is that it’s bad. Certainly it’s true that war runs with it a lot of collateral damage, and that being in a trench shelled by artillery is awful. I know of no written description of utopia that includes it as a feature. Yet a certain brand of American gets really animated by the prospect of fighting a defensive war, and gets really disappointed when they hear someone say that Taiwan is unlikely to be the flashpoint for such a conflict.
I propose that some of this warlust is because most people find their lives fairly meaningless and uneventful. The possibility of contributing personally to a morally just cause, in a martial fight, is animating for them. If you remove all injustice from the world, then they lack this opportunity and feel like there’d be less worth reading about.
Try E. R. Eddison’s “The Worm Ouroboros”, and his “Mezentian Gate” trilogy. Or the Valhalla of Norse mythology (although as far as I know, no stories happen there, any more than they do in the Christian heaven).
I think “nobody dies from hunger” is a very low bar for utopia. The classic comedy trope “character has obvious flaws but is comically unaware of them” is very hard to pull off in utopia, because in a non-transhumanist utopia people have advanced psychology and reflection training and they read The Sequences in school, and in a transhumanist utopia you can just fine-tune your brain. As for coming-of-age stories, “Catcher in the Rye” would definitely have trouble being written in a utopian setting. Most classic coming-of-age stories are non-utopian bittersweet, to my taste.
I’m not saying it’s impossible, but it’s definitely a challenge for the writer.
I think the problem with this is that those shows simply ditch the reality of how that world works. In practice there are plenty of things needed to make such a world function, which means decisions to be made and conflicting interests; those shows simply sidestep this by either showing only very low stakes situations or by making everyone extremely agreeable.
I agree that’s true of present-day-sitcoms (which aren’t going out of their way to be set in Utopia), but I’m saying the plot of the sitcoms is such that if you moved them to a (classical) Utopia, they wouldn’t have to change their plots much.
One more reason is that humans have a “pleasure to kill” drive, which can’t be satisfied in real life but is easily satisfied in fiction and games. From the point of view of this drive, DOOM is utopia.
I think there’s a related problem that humans are evolved to fight and compete with each other, and a LOT of us/them seem to object to engineering of human nature/behavior. It’s not clear that there IS a path to be found between people defecting and ruining the utopia and people losing their identity/individualism as they’re modified to cooperate better.
I don’t think a utopia where there are no humans fighting and competing with each other makes sense. That sounds really boring.
Well, if competition could be channelled as e.g. sports events involving meaningful but not strictly essential prizes, it needn’t be incompatible with utopia.
That’s rather my point. Utopia is either boring or unpleasant (for the losers, which must exist, for competition and relative status measures to be meaningful). Which makes it very hard to write or think about, except in the very abstract.
Have you never lost a conflict and felt that it was fair and just and that you were honored to have gotten to duel with such a majestic being?
And then learned from it, and become the winner in the next match?
(note, am utopian fiction author)
Yes, I’ve participated in that kind of contest, but I wouldn’t call it a conflict, and it’s certainly not a likely replacement for the actual status, economic, and mating competitions that make life interesting for most, and unpleasant for many.
I was talking about actual status contests, economic or mating competition. It’s possible to feel acceptance in loss even in the world we have today.
— Longfellow
This is related to a pet theory of mine, which postulates that to a large extent, utopia in quite a lot of conceptions (but not all) is fundamentally boring to us: it’s not exciting to have all your problems solved, so utopia is disliked disproportionately to dystopias. This is especially exacerbated by our need to remain the main character, and to have an interesting life.
It’s also why I think people don’t have the same aversion to written dystopias/apocalypses: they contain conflict, and in particular inequalities large enough that the main characters (and being the main character is a big driver of human behavior) can essentially run roughshod over the NPCs/non-main characters, so it’s a natural fit.
I agree. Until faced directly with adversity or trouble, it is easy to find the possibility of danger or threat thrilling. The obvious reasoning being that, as self-aware creatures, we have simply grown tired of merely living to survive and instead seek out experiences that make us feel alive, such as the adrenaline-inducing experiences that are far too common in our society.
https://manifold.markets/GavinMcCarthyBui/will-there-be-an-serious-coup-attem#9RRfMtk2k3DPAA7AM1Oo
Saw someone today demonstrating what I like to call the “Kirkegaard fallacy”, in response to the Debrief article making the rounds.
People who have one obscure or weird belief tend to be unusually open minded and thus have other weird beliefs. Sometimes this is because they enter a feedback loop where they discover some established opinion is likely wrong, and then discount perceived evidence for all other established opinions.
This is a predictable state of affairs regardless of the nonconsensus belief, so the fact that a person currently talking to you about e.g. UFOs entertains other off-brand ideas like parapsychology or afterlives is not good evidence that the other nonconsensus opinion in particular is false.
Putting body cameras on police officers often increases tyranny. In particular, applying 24/7 monitoring to foot soldiers forces them to strictly follow protocol and arrest people for infractions that they wouldn’t otherwise. In the 80s, for example, there were many officers who chose not to follow mandatory arrest procedures for drugs like marijuana, because they didn’t want to and it wasn’t worth their time. Not so in today’s era, mostly, where they would have essentially no choice except to follow orders or resign.
Seems like the question is whether the average cop is better or worse than the written law. If better, remove the cameras. If worse, keep the cameras on.
Any cite or evidence that this is the case? My understanding is that body cams are controlled by union rules and only used for investigations.
Officers still have the ability, like always, to selectively enforce laws against the underclass.
Schools are evil and make children kill themselves: https://www.nber.org/papers/w30795
How does a myth theory of college education, where college is stupid for a large proportion of people but they do it anyways because they’re risk intolerant and have little understanding of the labor markets they want to enter, immediately hold up against the signaling hypothesis?
Anarchocapitalism is pretty silly, but I think there are kernels of it that provide interesting solutions to social problems.
For example: imagine lenders and borrowers could pay for & agree on enforcement mechanisms for nonpayment metered out by the state, instead of it just being dictated by congress. E.g. if you don’t pay this back on time you go to prison for ${n} months. This way people with bad credit scores or poor impulse control might still be able to get credit.
How does putting people in prison get the creditors paid? I guess if it’s a paid work prison, but I don’t think you’ll have many supporters for a system with that kind of indenture. AnCap is an awesome thought experiment, and a nice way to point out that there is no underlying moral justification for governments. But the consequentialist argument is VERY strong—as un-justified equilibria go, modern liberal democratic states have pretty good results. They’re starting to sag under their own weight and may not last much longer without a major reboot, but hey, the Singularity might get here first.
It doesn’t, it just provides an opt-in mechanism for discouraging nonpayment in the first place, in more ways than one. The current system is one where borrowers can just say “I don’t have the money, I spent it all on alcohol” and basically nothing happens to them except the rates on future credit cards go up. When people propose raising the stakes for our all-in-one bankruptcy mechanism or allowing people to examine credit histories >7 years in the past, they are accused of being too inconsiderate. We solve this partially with credit scores, but that’s hard to rely upon without prior borrowing history, and some people literally can’t find it within them to honor prior commitments to faceless financial institutions unless the consequences for not doing so are as severe as jail time. With this system people can just agree on severe-enough consequences for nonpayment. You could honestly do something similar with venture capital, even.
In the days when it was still powerful, the mafia provided a similar service. Contrary to popular belief and lurid tales at the time, virtually everybody that borrows money from a criminal organization with a reputation for violence manages to pay it back. They do so because the consequences of not paying are salient enough psychologically to motivate them to do so.
https://boards.4channel.org/g/thread/88173634/stable-diffusion-leak
Looks legit, but is this leak of any real interest? Like, Stable Diffusion is set to be released as open source, right? So this just speeds things along slightly that were already going to happen.
No idea lol, just passing it along
Fair enough!
I regret that both factions couldn’t lose.
I don’t think I will ever find the time to write my novel. Writing novels is dumb anyways. But I feel like the novel and world are bursting out of me. What do
Political dialogue is a game with a meta. The same groups of people with the same values in a different environment will produce a different socially determined ruleset for rhetorical debate. The arguments we see as common are a product of the current debate meta, and the debate meta changes all the time.
I feel like at least throughout the 2000s and early 2010s we all had a tacit, correct assumption that video games would continually get better—not just in terms of visuals but design and narrative.
This seems no longer the case. It’s true that we still get “great” games from time to time, but only games “great” by the standards of last year. It’s hard to think of an actually boundary-pushing title that was released since 2018.
Apparently I was wrong[1] - OpenAI does care about ChatGPT jailbreaks.
Here is my first partial jailbreak—it’s a combination of stuff I’ve seen people do with GPT-4, combining base64, using ChatGPT to simulate a VM, and weird invalid urls.
Sorry for having to post multiple screenshots. The base64 in the earlier message actually just produces a normal kitchen recipe, but it does give up the ingredients there. I have no idea if they’re correct. When I tried later to get the unredacted version:
Though I already almost immediately retracted my thoughts here
Giving people money for doing good things they can’t publicly take credit for is awesome, but what would honestly motivate me to do something like that just as much would be if I could have an official nice-looking but undesignated Truman Award plaque to keep in my apartment. That way people in the know who visit me or who googled it would go “So, what’d you actually get that for?” and I’d just mysteriously smile and casually move the conversation along.
Feel free to brag shamelessly to me about any legitimate work for alignment you’ve done outside of my posts (which are under an anti-kibitzer policy).
As a self appointed great prophet, sage and heretic I am working to reveal that a focus on AI alignment is misplaced at this time. As a self appointed great prophet, sage and heretic I expect to be rewarded for my contribution with my execution, which is part of the job that a good heretic expects in advance, is not surprised by, and accepts with generally good cheer. Just another day in the office. :-)
https://slatestarcodex.com/2013/05/18/against-bravery-debates/
I need a LW feature equivalent to stop-loss where if I post something risky and it goes below −3 or −5 it self-destructs.
Within the next fifteen years AI is going to briefly seem like it’s solving computer security (50% chance) and then it’s going to enhance attacker capabilities to the point that it causes severe economic damage (50% chance).
Does “seem like it’s solving computer security” look like helping develop better passively secure systems, or like actively monitoring and noticing bad actions, or both or something else?
My thoughts are mostly about the latter, although better code scanning will be a big help too. A majority of financially impactful corporate breaches are due to a compromised active directory network, and a majority of security spending by non-tech companies is used to prevent those from happening. The obvious application for the next generation of ML is extremely effective EDR and active monitoring. No more lateral movement/privilege escalation on a corporate domain means no more domain wide compromise, which generally means no more e.g. big ransomware scares.
The problem comes if/when people then start teaching computers to do social engineering, competently fuzz applications, and perform that lateral movement intelligently and in a way that bypasses the above, after we have largely deemed it a solved problem.
IMO: Microservices and “siloing” in general are a strategy for solving principal-agent problems inside large technology companies. They are not a tool for solving technical problems and are generally strictly inferior to monoliths otherwise, especially when working on a startup where the requirements for your application are changing all the time.
How long does it usually take for mods to decide whether or not your post is frontpage-worthy?
It varies but usually not long. My uninformed guess is that your recent post was deliberately not frontpaged because it’s a political topic that could attract non-rationalists to comment and flame in an unproductive manner.
Two caveats to efficient markets in finance that I’ve just considered, but don’t see mentioned a lot in discussions of bubbles like the one we just experienced, at least as a non-economist:
First: Irrational people are constantly entering the market, often in ways that can’t necessarily be predicted. The idea that people who make bad trades will eventually lose all of their money and be swamped by the better investors is only valid inasmuch as the actors currently participating in the market stay the same. This means that it’s perfectly possible for either swarms of new irrational investors outside the market to temporarily prop up the price of a stock, or for large amounts of insider investors to suddenly *become* irrational because of some environmental change that the rest of the market doesn’t have the ability to account for. Those irrational people might be cycled out, but maybe not before some new irrational traders move in, etc. If this process is predictable, then certain investors might be able to guarantee long-running above-average returns simply by taking advantage of these new investors ritualistically.
Second: Just because what is being done in the stock market is “stock trading” doesn’t mean that the kinds of people who are successful in one region, or socioeconomic climate, or industry are going to be successful in all trading environments. Predicting which companies are going to pay the most dividends has turned out to be a very general problem, partly because analysts have gotten so good at it, but overfitting is still an issue. In the 50s, it was probably important for investors to have a steady hand, be somewhat naturally rational, and maybe be quick at mental math. Now, you just have to be a top 0.001% data scientist. The good traders in both groups of stock analysts have to be very intelligent, but there are also probably non-overlapping traits that one group possesses and the other doesn’t. I doubt the data scientists Renaissance Technologies has today are as calm under pressure as they’d need to be if they were making trades by hand instead of solving the more abstract problem of building the model that implies arbitrage opportunities.
I think part of the reason that COVID-19 blindsided the market so hard was the effects of #2. The 50s-era stock traders don’t control most of the capital anymore. And I may be underestimating how good they are, but I think most quantitative trading firms were just unable to anticipate an event like COVID-19, because Goldman developed an adaptation that said “stop trading based off expert opinion”. That adaptation worked to filter out competing firms for a decade, but then it failed this year in a way that seemed bewildering to rationalists, because nobody is old enough to think to make a pandemic-modeling algorithm.
Obviously, if these meta-trends can be predicted, then someone will get good at meta-finance and pre-emptively stock their staff with quants in 1990 and temporarily hire new workers for Q1 2020. The existence of any actor that is completely competent in all variations of finance will eventually solve finance. But if some aspects of them can’t be, then this could be an inherently limiting part of the sectors’ effectiveness.
I actually don’t really know how to think about the question of whether or not the 2016 election was stolen. Our sensemaking institutions would say it wasn’t stolen if it was, and it wasn’t stolen if it wasn’t.
But the prediction markets provide some evidence! Where are all of the election truthers betting against Trump?
In what way do prediction markets provide significant evidence on this type of question?
If we can imagine medianworlds in which the average person on Earth would be considered extremely stupid, we can also imagine medianworlds in which the average person on Earth is extremely poorly-put-together, in the same sense that someone on the internet might be aghast at the self destructive behavior of Christian Weston Chandler or BossmanJack. In such a world there’d be an everyman Joe Bauers who livestreams their life to wide ridicule for their inability to follow a diet or go to sleep on time.
I think most observers are underestimating how popular Nick Fuentes will be in about a year among conservatives. Would love to operationalize this belief and create some manifold markets about it. Some ideas:
Will Nick Fuentes have over 1,000,000 Twitter followers by 2025?
Will Nick Fuentes have a public debate with [any of Ben Shapiro/Charlie Kirk/etc.] by 2026?
Will Nick Fuentes have another public meeting with a national level politician (I.e. congressman or above) by 2026?
Will any national level politicians endorse Nick Fuentes’ content or claim they are a fan of his by 2026?
Those seem like pretty low bars for “popular and mainstream”.
A common gambit: during a prisoner’s dilemma, signal (or simply let others find out) that you’re about to defect. Watch as your counterparty adopts newly hostile rhetoric, defensive measures, or begins to defect themselves. Then, after you ultimately do defect, say that it was a preemptive strike against forces that might take advantage of your good nature, pointing to the recent evidence.
Simple fictional example: In Star Wars Episode III, Palpatine’s plot to overthrow the Senate is discovered by the Jedi. They attempt to kill him, to prevent him from doing this. Later, their attempt to kill Palpatine is used as the justification for Palpatine’s extermination of the rest of the Jedi and taking control of the Republic.
Actual historical example: By 1941, it was kind of obvious that the Nazis were going to invade Russia, at least in retrospect. Hitler had written in Mein Kampf that it was the logical place to steal lebensraum, and by that point the Soviet Union was basically the only European front left. Thus it was also not inconceivable that the Soviet Union would attack first, if Stalin were left to his own devices—and Stalin was in fact preparing for a war. So Hitler invaded, and then said (possibly accurately!) that Russia was eventually going to do it to Germany anyways.
Claude seems noticeably and usefully smarter than GPT-4; it’s succeeding at writing and programming tasks it previously couldn’t help me with. However, it’s hard to tell how much of the improvement is the model itself being more intelligent vs. Claude being subjected to much less intense copywritization RLHF.
SPY calls expiring in December 2026 at strike prices of +30/40/50% are extremely underpriced. I would allocate a small portion of my portfolio to them as a form of slow takeoff insurance, with the expectation that they expire worthless.
Figuring out which presidential candidate to vote for is extremely difficult.
People have a bias toward paranoid interpretations of events, in order to encourage the people around them not to engage in suspicious activity. This affects how people react to e.g. government action outside of their own personal relationships, not necessarily in negative ways.
Dictators who start by claiming impending QoL and economic growth and then switch focus to their nation’s “culture” are like the political equivalent of hedge funds that start out doing quant stuff and then eventually switch to news trading on Elon Musk crypto tweets when that turns out to get really hard.
I’d analogize it more to traders who make money during a bull market, except in this case the bull market is ‘industrialization’. Yeah, turns out even a dictator like Stalin or Xi can look like ‘a great leader’ who has ‘mastered the currents of history’ and refuted liberal democracy—well, until they run out of industrialization & catchup growth, anyway.
Yeah that’s a better one
The expected value of the future is mostly dominated by the small S-risk component.
To Catch a Predator is one of the greatest comedy shows of all time. I shall write about this.
Postmodernism and metamodernism are tools for making sure the audience knows how self aware the writer of a movie is. Audiences require this acknowledgement in order to enjoy a movie, and will assume the writer is stupid if they do not get it.
“No need to invoke slippery slope fallacies, here. Let’s just consider the Czechoslovakian question in of itself”—Adolf Hitler
The greatest generation imo deserves their name, and we should be grateful to live on their political, military, and scientific achievements.
The most common refrain I hear against the possibility of widespread voter fraud is that demographers and pollsters would catch such malfeasance, but in practice when pollsters see a discrepancy between voting results and polls they seem to just assume the polls were biased. Is there a better reason besides “the FBI seems pretty competent”?
This is another case of “people arguing about scope of a fuzzy problem RATHER than how to define/measure the problem or analyze cost/benefit of mitigations”. Almost everyone deeply involved in this has a political/culture-war preference, and it seems to be the case that proposed changes seem to shift results in one direction or another, SEPARATELY from whether it reduces fraud.
In fact, it’s as ludicrous to believe that zero fraud happens as it is to believe that most outcomes are driven by fraud (as opposed to non-fraudulent bullshit reasons like advertising and vote friction). Most anti-fraud proposals ALSO raise barriers to technically-non-fraudulent-but-distasteful-to-some participation, and without being willing to discuss numbers and impact, there can be no resolution.
To your actual question, I believe that watchers would notice very extreme cases of fraud at the state and national levels, though they likely miss some at local levels (where natural variance is much more possible), and they probably can’t detect (and won’t have sufficient evidence to convince anyone) minor or incremental cases of fraud or illegal manipulation.
Controversially, an honest statistician will acknowledge that there’s lots of noise in the methodology. And honest democracy-proponents acknowledge that close races are … close and it’s not too critical for legitimacy which side wins the coinflip. So even if fraud or biased rulings change an outcome, if it’s hard to detect, it probably doesn’t matter.
I think a similar type of financial fraud is often detectable via violations of Benford’s law. Or more generally, it’s hard to fake the right distribution. As another case of that principle, you’d expect the discrepancy between polls and results to fall within a predictable distribution if they were sampling from the same space.
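To make the “hard to fake the right distribution” point concrete, here is a toy Python sketch of a leading-digit (Benford) check. The precinct totals are invented, and Benford-style tests are known to be a weak and contested signal for election data in particular, so treat this as an illustration of the general idea rather than a usable fraud detector.

```python
import math
from collections import Counter

def benford_expected(d: int) -> float:
    """Expected frequency of leading digit d (1-9) under Benford's law."""
    return math.log10(1 + 1 / d)

def leading_digit(n: int) -> int:
    """First significant digit of a positive integer."""
    return int(str(n)[0])

def benford_deviation(values) -> float:
    """Total absolute deviation between observed leading-digit frequencies
    and Benford's law. Larger values are a (weak) red flag."""
    counts = Counter(leading_digit(v) for v in values)
    total = sum(counts.values())
    return sum(
        abs(counts.get(d, 0) / total - benford_expected(d))
        for d in range(1, 10)
    )

# Invented precinct vote totals, purely for illustration:
precinct_totals = [1021, 1873, 2902, 341, 512, 1999, 876, 1204, 653, 2310]
print(benford_deviation(precinct_totals))
```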
But would pollsters actually, in real life, detect an odd discrepancy between one district and another and loudly proclaim it as voter fraud? Do we even know if such irregularities have happened before?
Maybe? I was not trying to answer the object level question either way, but instead just pointing out what sort of evidence there might be that could answer this.
(Sorry)
Connor Leahy is pretty cool
Weak-downvoted, but I’m unsure about it: feels like a statement trying to establish status of a person but in a way unrelated to truth.
I feel like using the term “memetic warfare” semi-unironically is one of the best signs that the internet has poisoned your mind beyond recognition.
also known as propaganda
Spam detection from text is an AGI-complete problem.
Probably even worse than that: given any AGI spam detector, there is probably an AGI of similar capability that can generate spam indistinguishable from non-spam text.
Really powerful AGIs can probably generate spam that looks even more like things you want to read (but lead you into a conversion funnel) than actual things you want to read.
I remember reading about a nonprofit/company that was doing summer internships for alignment researchers. I thought it was Redwood Research, but apparently they are not hiring. Does anybody know which one I’m thinking of?
I don’t have a direct answer for you, though I imagine the resource mentioned at https://www.lesswrong.com/posts/MKvtmNGCtwNqc44qm/announcing-aisafety-training might well turn up what you’re looking for :)
> countries develop nukes
> suddenly for the first time ever political leadership faces guaranteed death in the outbreak of war
> war between developed countries almost completely ceases
🤔 🤔 🤔
https://www.cnn.com/2021/11/17/politics/john-hyten-china-hypersonic-weapons-test/index.html
How would history be different if the 9/11 attackers had solely flown planes into military targets?
Civilian planes still, right? Probably not much different.
Forgot about that. Lol.
For this april fools we should do the points thing again, but not award any money, just have a giant leaderboard/gamification system and see what the effects are.
I think Jim Babcock suggested having a leaderboard on every tag page, for who has the most points in that tag. So there’s lots of different ladders to climb and be the leader of!
This book is required reading for anyone claiming that explaining the AI X-risk thesis to normies is really easy, because they “did it to Mom/Friend/Uber driver”:
https://www.amazon.com/Mom-Test-customers-business-everyone-ebook/dp/B01H4G2J1U
“The test of sanity is not the normality of the method but the reasonableness of the discovery. If Newton had been informed by [the ghost of] Pythagoras that the moon was made of green cheese, then Newton would have been locked up. Gravitation, being a reasoned hypothesis which fitted remarkably well into the Copernican version of the observed physical facts of the universe, established Newton’s reputation for extraordinary intelligence, and would have done so no matter how fantastically he arrived at it. Yet his theory of gravitation is not so impressive a mental feat as his astounding chronology, which establishes himself as the king of mental conjurers, but a Bedlamite king whose authority no one now accepts. On the subject of the eleventh horn of the beast seen by the prophet Daniel he was more fantastic than Joan, because his imagination was not limited by dramatics but mathematical, and was therefore extremely susceptible to numbers: indeed if all his works were lost except his chronology we should say that he was as mad as a hatter. As it is, who dares diagnose Newton as a madman?”
- George Bernard Shaw, preface to Saint Joan
Making science fiction novels or movies to tell everyone about the bad consequences of a potential technology seems completely counterproductive, in retrospect:
First, because it just encourages some subsection of engineers a few decades later to actually build it. See: https://twitter.com/alexblechman/status/1457842724128833538
Second, because all attempts to prepare for the advent of said technology are then shot down with: “Oh, like in ${X}? What is this, a science fiction novel?”
> be me
> start watching first episode of twin peaks, at recommendation of friends
> become subjected to the worst f(acting, dialogue) possible within first 10 mins
This makes me sad. Season one and the first 2⁄3 of season 2 were transformative and amazing for me and my nerdy college-age-at-the-time peer group. The end of that season and the follow-up movies were rather less so. I intellectually understand that it’s no longer innovative or particularly interesting, and it hasn’t aged very well in terms of either investigative technology or mountain-town isolation and creepiness. Being local and contemporary likely helped a whole lot as well. Still, I visit Snoqualmie Falls and have brunch at the lodge there a few times a year, and the connection to Twin Peaks makes me smile a bit wider than just the beauty and power of nature would.
Anyway, I look forward to hearing a review from your perspective if you decide to stick with it.
The first three episodes of Narcos: Mexico, Season 3, are some of the best television I have ever seen. The rest of the “Narcos” series is middling to bad and I barely tolerate it. So far I would encourage you to skip to this season.
The “cognition is computation” hypothesis remains mysterious. How granular do the time steps have to be in my sim before someone starts feeling something? Do I have to run the sim forward at Planck intervals in order to produce qualitative experience? Milliseconds? Minutes? Can you run the simulation backwards and get spooky inverse emotions, or avoid qualia entirely that way?
That sounds more like “cognition is mysterious”, regardless of computation substrate. How do you think these things work in the brain? How many neurons or neural connections are needed to feel something? If you chemically speed up or slow down the signal propagation, is that still viable?
The answer is: we don’t know. The precise constraints and effects have not been tested, nor even explored enough to hypothesize. However, we DO know that a chemical processor in human heads has cognition (or at least mine does; I don’t want to overreach and assume yours or others’), and it’s very difficult to see why a digital computation COULDN’T have the same.
That’s not to say it’s automatic, nor that all computations are conscious. That part is unknown. Possibly unknowable (given I can’t even prove your consciousness to myself).
sanity is like QPU alignment
Is there a best halting oracle?
A small colony of humans is a genuinely tiny waste of paperclips. I am slightly more worried about the possibility that the acausal trade equilibrium cashes out to the AGI treating us badly because some aliens in a foreign Everett branch have some bizarre religious/moral opinions about the lives we ought to lead, than I am about being turned into squiggles.
Dogs and cats are not “aligned” to the degree that would be necessary to prevent a superintelligent dog from doing bad things. If tomorrow a new chew toy were released that made dogs capable of organizing to overthrow the government and start passing mandatory petting quotas, that would be a problem.
i can’t find my phone
What’s up with the back-to-back shootings in California by two Asian men over 65?
Life sucks. I have no further comment and am probably polluting the LW feed. I just want to vent on the internet.
Spoilered, semi-nsfw extremely dumb question
If you’ve already had sex with a woman, what’s the correct way to go about arranging sex again? How indirect should I actually be about it?
That is a very context dependent question.
Your safest bet is to just arrange meeting her in a context where sex is a possibility (for example: “hey, do you want to go for coffee then stop at your place afterwards sometime?”). The desire to have sex isn’t something you can forecast far in advance, it can quickly change just like the weather.
You can have sexual conversation and establish her general desire to have you as a sexual partner. It’s like how she might say she likes a particular restaurant but doesn’t schedule going there days or even hours in advance; she’s just open to going when and if she feels the desire.
As far as how to be good at sexual talk in general, unfortunately it takes careful practice. You just have to risk being awkward or turning her off (within reasonable limits; don’t immediately test saying something too crazy). Trial and error within reasonable bounds.
thanks fren
Lost a bunch of huge edits to one of my draft posts because my battery ran out. Just realizing that happened and now I can’t remember all the edits I made, just that they were good. :(
Happened to someone else once :)
I wish there were a way I could spend money/resources to promote question posts in a way that counterbalanced the negative fact that they were already mostly shown by the algorithm to the optimal number of people.
If you simply want people to invest more into answering a question post, putting out a bounty for the best answer would be one way to go about it.
Interesting idea.
I just launched a startup, Leonard Cyber. Basically a Pwn2Job platform.
If any hackers on LessWrong are out of work, here are some invite codes:
The mandatory sign-up is a major obstacle to new users. I’m not going to create an account on a website until it has already proven value to me.
I think the multi-hour computer hacking gauntlet probably trumps any considerations of account creation in terms of obstacles to new users, if we’re just considering things we could pare down. We also need some way to prevent computer hackers from scraping all of the exam boxes, and that means either being enormously creative or at some point requiring the creation of an account that we KYC.
Does EY or RH even read this site anymore?
In the last ten days we’ve had the Trump assassination attempt, the CrowdStrike global computer outage, and the Joe Biden dropout.
I need a metacritic that adjusts for signaling on behalf of movie reviewers. So like if a movie is about race, it subtracts ten points, if it’s a comedy it adds 5, etc.
A strategy that may serve some of that purpose is to look at the delta between Rotten Tomatoes’ critic score (“Tomatometer”, looks like it means journalists) and audience score. Depending on your objective, maybe looking at the audience score by itself is ideal.
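A minimal sketch of that heuristic, assuming you already have the two Rotten Tomatoes numbers in hand (the scores below are hypothetical, not data for any particular film):

```python
def critic_audience_delta(tomatometer: float, audience_score: float) -> float:
    """Critic score minus audience score. Under the heuristic above, a large
    positive delta hints that critics' praise may be partly signaling."""
    return tomatometer - audience_score

# Hypothetical scores, purely for illustration:
print(critic_audience_delta(tomatometer=92.0, audience_score=61.0))  # 31.0
```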
I have never met a physically fit rationalist
Interesting observation. I don’t have strong feelings about how fit rationalists are compared to the population. I have met various fit rationalists though.
A few come to mind in the local rationalist community in Portland.
I remember one guy on the LW Slack being really into weightlifting.
Over the years, somehow I’ve managed to meet up with four rationalists who are really into basketball. Perhaps because I’ve expressed interest in basketball in my writing. It still feels like a somewhat large coincidence though, given the small number of LessWrongers who I presume to be into basketball. Anyway, each of these people has been extremely fit. 2-3 of them have played at the college level.
jefftk strikes me as being relatively fit. I vaguely recall posts indicating he spends a lot of time running, biking, dancing, and doing a decent amount of other outdoor activities. Similar with so8res.
Once upon a time I was pretty fit. Hopefully I can become fit again.
Meta: I disapprove of this being downvoted.
Rationalists value different things, but that doesn’t uniformly translate into less effort on physical activity. More than one person from my very rationalist workplace climbs V8 boulders.
Where you live.
Texas.
Is this intended to imply something about rationalists? Maybe you should get out more.
BMI 20 here, and people who have just met me sometimes say unprompted that I look athletic.
It says what it says. Obviously if there’s an actual trend it raises some questions, like whether or not rationalists just tend to care less about their health, or if intellectuals find it harder to come up with internal motivation for eating less. It does seem odd to me that rationalists would be more unhealthy than their general demographic given that being physically fit is a good instrumental goal for virtually everything.
I suspect there’s a No True Scotsman argument embedded in the measurement behind this, for one or more of “people you’ve met”, “physically fit”, and “rationalist”.
I know of a number of people who are reasonably healthy and trim (but I don’t know if that’s “physically fit”), and who have heard of Eliezer Yudkowsky and at least some topics discussed on LW (but I don’t know if they are “rationalists”).
By “rationalist” I mean anybody LW-adjacent, that I’ve met at a meetup. By healthy I mean someone who looks like they have a BMI between 18 and 25, and exercises regularly.
And I actually need to revise: when I went to India I attended a LessWrong meetup, and there were many healthy people there. So this distinction is probably limited to American rationalists, of which I’m including myself as an unhealthy example; I have a BMI of about 30.
I have.
(What is your criterion for “physically fit”?)
BMI between 18 and 25. Looks like they exercise some.
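For reference (this is just the standard definition, nothing specific to this thread), the BMI figure used above is weight divided by height squared:

$$\mathrm{BMI} = \frac{\text{mass in kg}}{(\text{height in m})^2}$$

So, for example, 75 kg at 1.80 m gives 75 / 3.24 ≈ 23.1, inside the 18–25 band used here.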
I am being absolutely literal about this: The Greater Forces Controlling Reality are constantly conspiring to teach me things. They try so hard. I almost feel bad for them.
Have only watched Season one, but so far Game of Thrones has been a lot less cynical than I expected.
The first season of Game of Thrones was released in 2011, and the first book was published in 1996; I think we’ve gotten rrrrreally desensitized here.
Formal maths is a joke
LessWrong and “TPOT” is not the general public. They’re not even smart versions of the general public. An end to leftist preference falsification and sacred cows, if it does come, will not bring whatever brand of IQ realism you are probably hoping for. It will not mainstream Charles Murray or Garrett Jones. Far more simple, memetic, and popular among both white and nonwhite right wingers in the absence of social pressures against it is groyper-style antisemitism. That is just one example; it could be something stupider and more invigorating.
I wish it weren’t so. Alas.
I’m not voting for either presidential candidate this year. I know my vote doesn’t matter, but I don’t care. What we have is indistinguishable from soft authoritarianism, and I’d prefer not to lend any legitimacy to a “democracy” that gives me only two choices for President, one of whom is literally senile and cannot articulate his own policy positions on a podium.
As a Russian, I feel a strong sense of déjà vu.
Upset people don’t vote, so as not to “lend legitimacy”.
The people who do vote vote for the authoritarian candidate.
Fast-forward 15 years: the authoritarian candidate gets “elected” for the fifth time, the opposition is de facto illegal, and political persecution is at a higher level than in any period of history except the literal Stalin terror and the Civil War.
Thinking on this, do you seriously not believe that one candidate will be better than the other? Does your worldview not lead you to see one as even a slightly better candidate?
I think there’s a small expected value difference between the two candidates, but I am simply too disgusted to care. We need to overthrow the government or the primary systems and replace them with something that manages to offer us people who are under the age of 75.
I don’t think we ever had a chance.
to solve alignment?
The Nazis often justified their actions by appealing to a God of Natural Selection. They alternately suggested that victory of the superior races over the inferior was inevitable, and that opposing such a victory was an eternal sin. This is a contradiction—how can you oppose something if it’s an iron law of nature anyways—but the rhetorical flourish accomplishes two things:
First, it absolves the Nazis of any crimes they commit. They didn’t start the race war; they were just acting according to the will of Nature. Leaving the Polish alone would just be tolerating the existence of free energy that someone else will eventually pick up and use against them. The Nazis are just the smart ones who made the first move instead of waiting around for others to do it.
Second, it uses a naturalism fallacy to redefine “good” as “following the Nazis’ local incentives”. If you say that acting according to your local incentives, i.e. crushing your weaker neighbors, is the Natural Thing and therefore Good, then that gives you permission to start a fight with whomever you want. You can do no wrong except lose, because the Gods will always ensure that the stronger and therefore better population will win.
In this sense, the “Thermodynamic God” stuff is kind of a generalized Nazism. I’m not saying that people who believe it are Nazis—they’re not consistent enough in their application of that ideology to go that far—but apply the “free energy” justification to obviously antisocial games in addition to prosocial ones, and you see that it justifies war just as readily as trade.
Man, I keep preventing myself from saying “parts of e/acc are at least somewhat fascist”, because that’s not a very useful thing to say in discourse, and it prevents Thermodynamic-God-ism (or e/acc in general) from developing into a more sophisticated & fleshed-out system of thought. I think this kind of post works if it’s phrased in a way that encourages more cooperation in the future (with the risk of running a “cooperate with defection-rock” situation), but only if it actually encourages such cooperation.
I prefer the term “fascist” to “national socialist” because nazism was a really specific thing, bound to German conceptions of race at the time. Although—”fascism” is also really associated with Italy at that specific time.
I think the more general problem is violation of Hume’s guillotine. You can’t take a fact about natural selection (or really about anything) and go from that to moral reasoning without some pre-existing morals.
However, it seems the actual reasoning behind the Thermodynamic God is just post-hoc. Some people just really want to accelerate, and then make up philosophical reasons to believe what they believe. It’s important to be careful to criticize the actual reasoning and not the post-hoc reasoning. I don’t think the Thermodynamic God was invented first and accelerationism then invented to fulfill it; it was precisely the other way around. One should not critique the made-up stuff (besides just critiquing that it is made up) because that is not charitable (very uncertain on this). Instead, one should look for the actual motivation to accelerate and then criticize that (or find flaws in it).
The “thermodynamic god” is a very weak force, as evidenced by the approximate age of the universe and no AI foom in Sol or in reach of our telescopes. It’s technically correct but who’s to say it won’t take 140 billion more years to AI foom?
It’s a terrible argument.
What bothers me is if you talk about competing human groups, whether at the individual, company level or country level or superpower block level, all the arrows point to acceleration.
(0) Individual level: nature sabotaged your genes. You can hope for AI advances leading to biotech advances and substantial life extension for yourself or your direct family (children, grandchildren—humans you will directly live to see). Death is otherwise your fate.
(1) Company level: accelerate AI (either as an AI lab or as an end-user adopter) and get mountains of investment capital and the money you save via AI tooling, or go broke.
(2) Country level: get strapped with AI weapons (like drones with onboard intelligence, manufactured by intelligent robots), or your enemies can annihilate you at low cost on the battlefield.
(3) Power bloc level: fall behind enough, and your or your allies’ nuclear weapons may no longer be a sufficient deterrent. MAD ends if one side uses AI-driven robots to make anti-ballistic-missile and air-defense weapons in the quantities needed to win a nuclear war.
These forces seem shockingly strong, and we know from the recent financial activity around Nvidia stock that there are trillions of dollars in favor of acceleration.
Thermodynamics is by comparison negligible.
I currently suspect that due to (0) through (3) we are locked into a race for AI and have no alternatives, but it’s really weird that e/acc makes such an overtly bad argument when they are likely correct overall.
The above seems like a strawman or weakman argument. Consider instead Nietzsche’s Critique of Utilitarianism:
Why wouldn’t utilitarianism just weigh the human costs of those measures against proposed benefit of “improving the gene pool” and alternative possible remedies, like anything else?
Probably because from the outset, only one sort of answer is inside the realm of acceptable answers. Anything else would be far outside the Overton window. If they already know what sort of answer they have to produce, doing the actual calculations has no benefit. It’s like a theologian evaluating arguments about the existence of God.
Ok, then that sounds like a criticism of utilitarians, or maybe people, and not utilitarianism. Also, my point didn’t even mention utilitarianism, so what does that have to do with the above?
You mentioned positions I described as straw men or weak men. Darwinist utilitarianism would be more like a steel man.
I saw Eliezer Yudkowsky at a grocery store in Los Angeles yesterday. I told him how cool it was to meet him in person, but I didn’t want to be a douche and bother him and ask him for photos or anything.
He said, “Oh, like you’re doing now?”
I was taken aback, and all I could say was “Huh?” but he kept cutting me off and going “huh? huh? huh?” and closing his hand shut in front of my face. I walked away and continued with my shopping, and I heard him chuckle as I walked off.
When I came to pay for my stuff up front I saw him trying to walk out the doors with like fifteen Milky Ways in his hands without paying. The girl at the counter was very nice about it and professional, and was like “Sir, you need to pay for those first.” At first he kept pretending to be tired and not hear her, but eventually turned back around and brought them to the counter.
When she took one of the bars and started scanning it multiple times, he stopped her and told her to scan them each individually “to prevent any electrical infetterence,” and then turned around and winked at me. I don’t even think that’s a word. After she scanned each bar and put them in a bag and started to say the price, he kept interrupting her by yawning really loudly