Programmer, rationalist, chess player, father, altruist.
cata
It seems like Musk in 2018 dramatically underestimated the ability of OpenAI to compete with Google in the medium term.
Thanks for not only doing this but also noting the accuracy of the unchecked transcript; it’s always hard work to build a mental model of how good LLM tools are at what stuff.
I don’t know whether this resembles your experience at all, but for me, skills translate pretty directly to moment-to-moment life satisfaction, because the most satisfying kind of experience is doing something that exercises my existing skills. I would say that only very recently (in my 30s) do I feel “capped out” on life satisfaction from skills (because I am already quite skilled at almost everything I spend all my time doing) and I have thereby begun spending more time trying to do more specific things in the world.
I worked at Manifold but not on Love. My impression from watching and talking to my coworkers was that it was a fun side idea that they felt like launching and seeing if it happened to take off, and when it didn’t they got bored and moved on. Manifold also had a very quirky take on it due to the ideology of trying to use prediction markets as much as possible and making everything very public. I would advise against taking it seriously as evidence that an OKC-like product is a bad idea or a bad business.
Why is it cheaper for individuals to install some amount of cheap solar power for themselves than for the grid to install it and then deliver it to them, with economies of scale in the construction and maintenance? Transmission cost?
If you installed it in a preschool and it successfully killed all the pathogens there, the effect would be far from negligible.
Superficially, human minds look like they are way too diverse for that to cause human extinction by accident. If new ideas toast some specific human subgroup, other subgroups will not be equally affected.
Why do you feel so strongly about using so much eye contact in normal conversations? I sometimes make eye contact and sometimes don’t and that seems fine.
I agree with your sentiment that being very uncomfortable with eye contact is probably an indication of some other psychological thing you could work on, but it sounds like you maybe feel more strongly about it than that.
I played General Anderson and also wrote that note. My feeling is that this year seemed more “game-like” and less “ritual-like” than past years, but the “game” part suffered for the reasons I mentioned above, and the combination to me felt awkward. Choosing to emphasize either the “game” nature or the “ritual” nature seems to have some pros and cons. Since participating in the game inevitably made me curious about the choices involved, I will be interested to hear the LW team’s opinion on this in the retrospective.
A promising new game was just released, Maxwell’s Puzzling Demon. It looks like it goes deep with clever puzzles.
This post was difficult to take seriously when I read it but the “clown attack” idea very much stuck with me.
I think you should go to college if it sounds pleasant and fulfilling to go to one of the colleges you could go to (as Saul stated, colleges have many fancy amenities) and you are OK with sacrificing:

- The cost of the preparatory work you need to do to be admitted at that college.
- The cost of the tuition itself.
- 4+ years of your career and adult life.

in order to do something pleasant and fulfilling. You should also go to college if you don’t have any plan to get a job you like without a college degree, but you do have a plan to do it with a college degree, since it’s very important to get a job you like. Although, given that college is a huge investment, maybe you should have made that plan, or be making it.
If you aren’t much looking forward to spending 4 more years in school, and you could get a reasonable job without going to college, I think it would be crazy to go to college.
I don’t think most people are likely to be confused about which of these groups they are in. If Saul is confused I apologize but I think he must be a rare case.
The other arguments Saul made in his opening statement about why you might want to go to college seem very weak to me:
It’s a strong Chesterton’s fence.
This is an argument for why a fully generic high school student who knows nothing should go to college. It’s not an argument for why it’s good to get a college degree.
Defaults are for what a person with no information should do without thinking. Everyone at 16 has a huge amount of information about themselves, their dreams, their abilities, how they relate to school, how they relate to others, what the contemporaneous world is like. The default is not responsive to any of that. It’s completely inappropriate to be applying some super-general policy about norms and conformity when considering some giant extremely specific high-stakes offer that is only about your own life. This is what I disagree with the most in this dialogue.
General upkeeping of norms/institutions is good.
No it’s not. If it’s not in someone’s self-interest to get a college degree, there’s no way it’s in the social interest for there to be a norm of everyone getting college degrees.
Some people may be totally unproductive and/or be a drain on society (e.g. crime) if they don’t go.
That’s a reason to not be a career criminal, not a reason to get a college degree.
By the way, it’s pretty unproductive to go to college for 4 years while someone else pays for your room, board, and entertainment.
I don’t believe there are a substantial number of people who are incapable of being productive after 12 years of high school, but then if you send them to college for 4 years, now they can be productive. That doesn’t make sense. The way you would train a very low-skill person to be productive is by training them on a specific job, not sending them to college.
Do you believe the result about priming people with a $1500 bill and a $150 bill? That pattern matches perfectly to an infinite list of priming research that failed to replicate, so by default I would assume it is probably wrong.
The one about people scoring better after harvest makes a lot more sense since, like, it’s a real difference and not some priming thing, so I am not as skeptical about that.
It kind of weirds me out that this post has such a high karma score. It’s a fun read, and maybe it will help some Wikipedia admins get their house in order, but I don’t like “we good guys are being wronged by the bad outsider” content on LessWrong. No offense to Trace, who is a great writer and clearly worked hard putting all this together.
It seems like this is a place where “controversial” and “taboo” diverge in meaning. The politician would notice that the sentence was about a taboo topic and bounce off, but that’s probably totally unconnected to whether or not it would be controversial among people who know anything about genetics or intelligence and are actually expressing a belief. For example, they would bounce off regardless of whether the number in the sentence was 1%, 50%, or 90%.
I thought the sequels were far better than the first book. But I have seen people with the opposite opinion.
[Question] Karma votes: blind to or accounting for score?
How did you like your trip in the end?
It definitely depends. I think there are lots of people for whom there are lots of domains of information in which they are highly trustworthy in realtime conversation. For example, if I am working as a programmer, and I talk to my smart, productive coworker and ask him some normal questions about the system he built recently, I expect him to be highly confident and well calibrated on what he knows. Or if I talk to my friend with a physics PhD and ask him a question like what causes friction, I expect him to be highly confident and well calibrated. Certainly he isn’t likely to say something confidently and then, when I look on Wikipedia, turn out to be totally wrong.
In general I take more seriously what people say if:

- They have a source of information that could be good about the thing they are saying.
- They are capable of saying they don’t know instead of bullshitting me, when they don’t know, and in general they respect the value of expressing uncertainty.
- The thing they are saying is the kind of thing that is easier to actually know, understand, and remember, rather than super hard. For example, maybe it is part of a consistent gears-level model of some domain, so if they forgot it or got it mixed up, they would notice their error.
One and a half years later it seems like AI tools are able to sort of help humans with very rote programming work (e.g. changing or writing code to accomplish a simple goal, implementing versions of things that are well-known to the AI like a textbook algorithm or a browser form to enter data, answering documentation-like questions about a system) but aren’t much help yet on the more skilled labor parts of software engineering.