yet we still don’t have anything close to a unified theory of human mating, relationships, and child-rearing that’s better.
We even seem to have a collective taboo against developing such a theory, or even against making relatively obvious observations.
I approve of the militant atheism, because there are just too many religious people out there; without drawing a firm line, we would have an Eternal September of people joining Less Wrong just to say “but have you considered that an AI can never have a soul?” or something similar.
And if being religious is strongly correlated with some political tribe, I guess it can’t be avoided.
But I think that going further than that is unnecessary and harmful.
Actually, we should probably show some resistance to the stupid ideas of other political tribes, just to make our independence clear. Otherwise, people would hesitate to call out bullshit when it comes from those who seem associated with us. (Quick test: Can you say three things the average Democrat believes that are wrong and stupid? What reaction would you expect if you posted your answer on LW?)
Specifically on trans issues:
I am generally in favor of niceness and civilization, therefore:
If someone calls themselves “he” or “she”, I will use that pronoun without thinking twice about it.
I disapprove of doxing in general, which extends to all speculations about someone’s biological sex.
But I also value rationality and free speech, therefore:
I insist on keeping an “I don’t know, really” attitude to trans issues. I don’t know, really. The fact that you are yelling at me does not make your arguments any more logically convincing.
No, I am not literally murdering you by disagreeing with you. Let’s tone down the hysteria.
There are people who feel strongly that they are Napoleon. If you want to convince me, you need to make a stronger case than that.
I specifically disagree with the claim that if someone changes their gender, it retroactively changes their entire past. If someone presented as male for 50 years, then changed to female, it makes sense to use “he” to refer to their first 50 years, especially if this is the pronoun everyone used at that time. Also, I will refer to them using the name they actually used at that time. (If I talk about Ancient Rome, I don’t call it the Italian Republic either.) Anything else feels like magical thinking to me. I won’t correct you if you do that, but please do not correct me, or I will be super annoyed.
Just some quick guesses:
If you have problems with willpower, maybe you should make your predictions explicit whenever you try to use it. I mean, as a rationalist, you are already trying to be better calibrated, so you could leverage the same mechanism into supporting your willpower. If you predict a 90% chance of success for some action, and you know that you are right, in theory you should feel little resistance. And if you predict a 10% chance of success, maybe you shouldn’t be doing it? This also helps you be honest with yourself.
(This has a serious problem, though. Sometimes the things with 10% chance of success are worth doing, if the cost is small and the potential gain large enough. Maybe in such cases you should reframe it somehow. Either bet on large numbers “if I keep doing X every day, I will succeed within a month”, or bet on some different outcome “if I start a new company, there is a 10% chance of financial success, and a 90% chance that it will make a cool story to impress my friends”.)
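The arithmetic behind both reframings is simple enough to write down (a minimal sketch; the specific payoff numbers and the 10%-per-day success rate are illustrative assumptions, not from the text):

```python
# Expected value of a single low-probability attempt, and the chance
# of at least one success when the attempt is repeated many times.

def expected_value(p_success: float, gain: float, cost: float) -> float:
    """Net expected value of one attempt."""
    return p_success * gain - cost

def chance_of_any_success(p_success: float, attempts: int) -> float:
    """Probability of at least one success in `attempts` independent tries."""
    return 1 - (1 - p_success) ** attempts

# A 10%-chance action with a small cost and a large gain is still worth it:
print(expected_value(0.10, gain=100, cost=1))    # positive expected value

# Betting on the aggregate: "if I keep doing X every day for a month"
# at 10% per day, success within the month is very likely.
print(chance_of_any_success(0.10, attempts=30))  # roughly 0.96
```

This is why the bet-on-large-numbers reframing works: a prediction that would be discouraging for a single attempt can be an honest, confident prediction for the month.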
This also suggests that it is futile to use willpower in situations where you have little autonomy. If you try hard, and then an external influence ruins all your plans, and this was all entirely predictable, you just burned your internal credibility.
(Again, sometimes you need at least to keep the appearance of trying hard, even if you have little control over the outcome. For example, you have a job where the boss overrides all your decisions and thereby ruins the projects, but you still need the money and can’t afford to get fired. It could help to reframe, to make the bet about the part that is under your control. Such as “if I try, I can make this code work, and I will feel good about being competent”, even if later I am told to throw the code away because the requirements have changed again.)
This also reminds me about “goals vs systems”. If you think about a goal you want to achieve, then every day (except for maybe the last one) is the day when you are not there yet; i.e. almost every day is a failure. Instead, if you think about a system you want to follow, then every day you have followed the system successfully is a success. Which suggests that willpower will work better if you aim it at following a system, and stop thinking about the goal. (You need to think about the goal when you set up the system, but then you should stop thinking about it and only focus on the system.)
The strategy of “success spiral” could be interpreted as a way to get your credibility back. Make many small attempts, achieve many small successes, then attempt gradually larger things. (The financial analogy is that when you are poor, you need to do business that does not require large upfront investments, and gradually accumulate capital for larger projects.)
Perhaps the “decisions” that happen in the brain are often accompanied by some change in hormones (I am thinking about Peterson saying how lobsters get depressed after they lose a fight), so we can’t just willpower them away. Instead we need to find some hack that reverts the hormonal signal.
Sometimes just taking a break helps, if the change in hormones is temporary and the usual level gets restored. Or we can do something pleasant to recharge (eat, talk to friends). Or we can try working with the unconscious, using visualization or power poses or whatever.
There is an ACX article on “trapped priors”, which in the Ayn Rand analogy would be… uhm, dunno.
The idea is that a subagent can make a self-fulfilling prophecy like “if you do X, you will feel really bad”. You use some willpower to make yourself do X, but the subagent keeps screaming at you “now you will feel bad! bad!! bad!!!” and the screaming ultimately makes you feel bad. Then the subagent says “I told you so” and collects the money.
The business analogy could be betting on a company-internal prediction market, where some employees figure out that they can bet on their own work ending up badly, and then sabotage it and collect the money. And you can’t fire them, because HR does not allow you to fire your “best” employees (where “best” is operationalized as “making excellent predictions on the internal prediction market”).
Parts of human mind are not little humans. They are allowed to be irrational. It can’t be rational subagents all the way down. Rationality itself is probably implemented as subagents saying “let’s observe the world and try to make a correct model” winning a reputational war against subagents proposing things like “let’s just think happy thoughts”.
But I can imagine how some subagents could have less trust towards “good intentions that didn’t bring actual good outcomes” than others. For example, if you live in an environment where it is normal to make dramatic promises and then fail to act on them. I think I read some books long ago claiming that children of alcoholic parents are often like that. They just stop listening to promises and excuses, because they have already heard too many of them, and they have learned that nothing ever happens. I can imagine that they turn this habitual mistrust against themselves, too. “I tried something, and it was a good idea, but due to bad luck it failed” resembles too closely the parent saying how they had the good insight that they need to stop drinking, but only due to some external factor they had to drink yet another bottle today. In short, if your environment fails you a lot, you can respond by becoming unrealistically harsh on yourself.
Another possible explanation is that different people’s attention is focused on different places. Some people pay more attention to the promises, some pay more attention to the material results, some pay more attention to their feelings. This itself can be a consequence of the previous experience with paying attention to different things.
Fair point. (I am not convinced by the argument that if the AIs are trained on human texts and feedback, they are likely to end up with values similar to humans’, but that would be a long debate.)
I want to read his nonfiction.
It would have been nice to read A Journal of the Plague Year during covid.
Once your conspiracy gets large enough, chances are some member will be able to take care of the legal issues if they arise, by whatever means necessary.
(It’s like starting a company: the critical part is growing to the point where you can afford ramen and a good lawyer. You want to get there as fast as possible. Afterwards you can relax and keep growing slowly, if you wish.)
Imagine that a magically powerful AI decides to set a new political system for humans and create a “Constitution of Earth” that will be perfectly enforced by local smaller AIs, while the greatest one travels away to explore other galaxies.
The AI decides that the fairest way to create the constitution is randomly. It will choose a length, for example 10,000 words of English text. Then it will generate all possible combinations of 10,000 English words. (It is magical, so let’s not worry about how much compute that would actually take.) Out of the generated combinations, it will remove the ones that don’t make any sense (an overwhelming majority of them) and the ones that could not be meaningfully interpreted as “a constitution” of a country (this is kinda subjective, but the AI does not mind reading them all, evaluating each of them patiently using the same criteria, and accepting only the ones that pass a certain threshold). Out of the remaining ones, the AI will choose the “Constitution of Earth” randomly, using a fair quantum randomness generator.
Shortly before the result is announced, how optimistic would you feel about your future life, as a citizen of Earth?
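For a sense of the scale involved, here is a back-of-envelope calculation (the 50,000-word vocabulary is my own illustrative assumption, not from the thought experiment):

```python
import math

# Number of distinct 10,000-word sequences, assuming a vocabulary
# of 50,000 English words (an illustrative figure).
vocabulary = 50_000
length = 10_000

# log10(vocabulary ** length), computed without building the huge number
digits = length * math.log10(vocabulary)
print(round(digits))  # the count has about 46,990 decimal digits
```

Even before filtering out nonsense, the search space is a number with tens of thousands of digits, which is why the “magical” disclaimer about compute is doing so much work.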
Saying the (hopefully) obvious, just to avoid potential misunderstanding: There is absolutely nothing wrong with writing something for a smaller group of people (“people working in this space”), but naturally such articles get less karma, because the number of people interested in the topic is smaller.
Karma is not a precise tool for measuring the quality of content. If there are more than a handful of votes, the direction (positive or negative) usually means something, but the magnitude is more about how many people felt that the article was written for them (therefore the highest karma goes to well-written articles aimed at a general audience).
My suggestion is to mostly ignore these things. Positive karma is good, but bigger karma is not necessarily better.
I apologize. I spent some time digging for ancient evidence… and then decided against publishing it.
Short version is that someone said something that was kinda inappropriate back then, and would probably get an instant ban these days, with most people applauding.
Going by today’s standards, we should have banned Gwern in 2012.
And I think that would have been a mistake.
I wonder how many other mistakes we made. The problem is, we won’t get good feedback on this.
Emotions are about reality, but emotions are also a part of reality, so we also have emotions about emotions. I can feel happy about some good thing happening in the outside world. And, separately, I can feel happy about being happy.
In the thought experiments about wireheading, people often say that they don’t just want to experience (possibly fake) happy thoughts about X; they also want X to actually happen.
But let’s imagine the converse: what if someone proposed a surgery that would make you unable to ever feel happy about X, even if you knew that X actually happened in the world. People would probably refuse that, too. Intuitively, we want to feel good emotions that we “deserve”, plus there is also the factor of motivation. Okay, so let’s imagine a surgery that removes your ability to feel happy about X, but solves the problem of motivation by e.g. giving you an urge to do X. People would probably refuse that, too.
So I think we actually want both the emotions and the things the emotions are about.
Welp, this was a short list.
Speaking only for myself, I can agree with the abstract approach (therefore: upvote), but I am not familiar with any of the existing projects mentioned in the article (therefore: no vote; because I have no idea how useful the projects actually are, and thus how useful the list of them is).
Library in the sense of “we collect texts written by other people” is: The Best Textbooks on Every Subject
I would like to see this one improved; specifically to have a dedicated UI where people can add books, vote on books, and review them. Maybe something like “people who liked X also liked Y”.
Also, not just textbooks, but also good popular science books, etc.
if you ask mathematicians whether
ZFC + not Consistent(ZFC)
is consistent, they will say “no, of course not!”
I suspect that many people’s intuitive interpretation of “consistent” is ω-consistent, especially if they are not aware of the distinction.
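For reference, the standard reasoning behind both reactions can be sketched as follows (this is the textbook Gödel argument, not something from the comment itself):

```latex
% By Goedel's second incompleteness theorem, if ZFC is consistent,
% then ZFC does not prove Con(ZFC), so the theory
T \;=\; \mathrm{ZFC} + \neg\,\mathrm{Con}(\mathrm{ZFC})
% is consistent. However, T proves that *some* number codes a
% ZFC-proof of a contradiction, while refuting each concrete candidate:
T \vdash \exists x\; \mathrm{Prf}_{\mathrm{ZFC}}(x, \ulcorner 0{=}1 \urcorner),
\qquad
T \vdash \neg\,\mathrm{Prf}_{\mathrm{ZFC}}(\bar{n}, \ulcorner 0{=}1 \urcorner)
\ \text{ for each numeral } \bar{n}.
% So T is consistent but not omega-consistent, which may be what the
% intuitive "no, of course not!" is actually tracking.
```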
I find it difficult to make distinct categories, but there seem to be two dimensions along which to classify relations:
How intense is the relation / how much we “click” emotionally and intellectually.
Whether the relation is expected to survive the change of current context.
(Even this is not a clear distinction, because “my relatives” is kinda contextual, but the context is there forever.)
Mapping to your system: close friends = high intensity context independent; friendly acquaintances = high intensity contextual; acquaintances = low intensity contextual.
One quadrant seems to be missing, but maybe that makes sense: if the relation is low intensity, why would people bother to keep it outside of the context where it originated?
I agree. The best advertisement for having kids is to see other people having kids. Not only because people instinctively copy others, but also because you can ask the parents the things you are curious about, or you can try to babysit their kids to get an idea what it would be like to have your own kids.
Also, the more places are parent-friendly, the less costly it is to become a parent. If your friends mostly socialize in loud places with lots of alcohol, starting a family will make you socially isolated, because you would not want to bring your kids to places like that. If instead your friends meet at a park, you can keep your social life and bring your kids along with you.
If many people meet at the same place, it can make sense to have a room specifically for kids, at least with some paper and crayons, so that the kids can play there and leave their parents alone for a moment. Also, one big box where people can bring toys they no longer need at home.