Programmer, rationalist, chess player, father, altruist.
cata
I just skimmed your downvoted post and linked doc. (I agree that there was no way I would have clicked through to the doc outside the context of this question.)
The post read like a big series of platitudes, or applause lights. The claims were too generic to be interesting to me. I agree with some, I don’t agree with others, but either way it wasn’t giving me anything I couldn’t generate myself.
The linked doc actually started out strong. You say that you have personally experienced how your own behavior and thinking change when you are materially deprived, and that you actually tested different kinds of deprivations and rewards on yourself over time, and observed patterns. That’s very interesting! I don’t know anything about that. I want to hear what you experienced and think about whether it has anything to do with my life and what I can see. I would upvote a post about that.
I think you’re writing these things to try to pitch your project, but people on LW mostly aren’t sitting around wanting to get pitched on projects. They want to read intellectually stimulating new ideas. And it’s not a convincing pitch either unless you show people you have the goods.
I don’t have anything special to say about this, but this direction excites me, so I am leaving a comment. Thanks for running it and writing up these thought-provoking ideas.
Looks like LLM content.
There’s another blogger, Nathan Tankus, who is also reporting accounts directly from his sources within the BFS. He wears his bias on his sleeve and goes wild with the hyperbole, but he is a prolific public intellectual of some sort so he may be accurately reporting the basic facts. He also did an interview on Odd Lots but it didn’t really have anything new.
Correct me if I’m wrong but this looks substantially LLM-written.
I would be surprised if it were ethically important for you to donate that much. LW has made a pretty big difference to my life (e.g. my career, marriage, and a big chunk of my bank account are causally downstream of LW existing). I estimated that there are probably something like $100 million worth of people for whom it was similarly impactful as it was for me, plus a long tail of people for whom it was somewhat less impactful, so I owed on the order of 1% of my net worth, such that if everyone like me who saw this fundraiser did the same then it would have enough money to thrive.
So unless LW was really important to you or unless you are sure that you will be a millionaire in the future and you are just donating in advance, I don’t think you owe $1000. But if you want to donate it, then do.
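As a minimal sketch of that arithmetic (the fundraiser target and net-worth figures below are illustrative assumptions, not the real numbers):

```python
# Back-of-envelope "fair share" estimate. All numbers are illustrative assumptions.

fundraiser_target = 1_000_000       # assumed amount LW needs to thrive, in dollars
similar_people_net_worth = 100e6    # assumed total net worth of people impacted like me

# If everyone in that pool gave the same fraction of their net worth,
# each person's fair fraction would be roughly:
fair_fraction = fundraiser_target / similar_people_net_worth
print(f"fair share: {fair_fraction:.1%} of net worth")        # -> 1.0%

# Applied to a hypothetical individual net worth:
my_net_worth = 500_000
print(f"my donation: ${fair_fraction * my_net_worth:,.0f}")   # -> $5,000
```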
Thanks for this elaboration. One reason I would be more hopeful here than in the case of private airplanes (less so potable water) is that, while providing me a private airplane may mostly only benefit me and my family by making my life more leisurely, providing me or my children genetic enhancement may be very socially productive, at least by improving our productivity and making us consume fewer healthcare resources. So it seems possible to end up with an arrangement where it’s socially financed and the surplus is shared.
It’s interesting that you describe humans as remaining “equal in the biological lottery”. Of course, to the humans, when the lottery is decided before they are born and they are given only one life to live, it doesn’t feel very equal when some of them win it and others lose. It’s not obvious to me that inequality based on who spends money to enhance their own or their family’s biology is worse than inequality based on random chance. It seems like effects on social cohesion or group conflict may result regardless of the source of the inequality.
Do you have any suggestions for how genetic enhancement technology could hypothetically be developed in a better way so that the majority is not left behind? Or in your view would it be best for it to never be developed at all?
Can you elaborate on why you think that genetic modification is more prone to creating inequality than other kinds of technology? You mentioned religious reasons in your original comment. Are there other reasons? On priors, I might expect it to follow a typical cost curve where it gets cheaper and more accessible over time, and where the most valuable modifications are subsidized for some people who can’t afford them.
To me, since LessWrong has a smart community that attracts people with high standards and integrity, by default if you (a median LW commenter) write your considered opinion about something, I take that very seriously and assume that it’s much, much more likely to be useful than an LLM’s opinion.
So if you post a comment that looks like an LLM wrote it, and you don’t explain which parts are the LLM’s opinion and which are your own, that makes it difficult to use. And if there’s a norm of posting comments that are partly unmarked LLM opinions, then I have to take on the very large burden of evaluating every comment to figure out whether an LLM wrote it, just to decide whether to take it seriously.
I have been to lots of conferences at lots of kinds of conference centers and Lighthaven seems very unusual:
- The space has been extensively and well designed to be comfortable and well suited to the activities.
- The food/drink/snack situation is dramatically superior.
- The on-site accommodations are extremely convenient.
I think it’s great that rationalist conferences have this extremely attractive space to use that actively makes people want to come, rather than if they were in like, a random hotel or office campus.
As for LW, I would say something sort of similar:
- The website and feature set are now dramatically superior to e.g. Discourse or PHPBB.
- It’s operated by people who spend lots of time trying to figure out new adjustments that make it better, including ones that nobody else is doing, like splitting out karma and agree voting, and cultivating the best old posts.
- Partially as a result, the quality of the discussion is basically off the charts for a free general-interest public forum.
In both cases, I don’t see other groups trying to max out the quality level in these ways, and my best guess for why is that no other group is equally capable, has a similarly strong vision of what would be a good thing to create, and wants to spend the effort to do it.
I would approach this by estimating something like the Shapley value of the involved parties, answering the questions “for a given amount of funding, how many people would have been willing to provide that funding if necessary” and “given an amount of funding, how many people would have been willing and able to do the work of the Lightcone crew and produce similar output?”
I don’t know much about how Lightcone operates, but my instinct is that the people are difficult to replace, because I don’t see many other projects very similar to Lighthaven and LW, and that the funding is somewhat replaceable (for example, I would be willing to donate much more than I actually did if I thought there would be less other money available). So probably the employees should be getting the majority of the credit.
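To make that concrete, here is a minimal sketch of the kind of Shapley computation I mean, on a toy two-party game (funders and staff) with made-up values reflecting my guess that funding is more replaceable than the people:

```python
from itertools import permutations

# Toy coalition game: value produced (arbitrary "impact" units) by each subset
# of {funders, staff}. The numbers are made up to reflect the intuition that
# the money is somewhat replaceable and the team is not.
value = {
    frozenset(): 0,
    frozenset({"funders"}): 1,             # money alone, without this team
    frozenset({"staff"}): 4,               # the team could likely find money elsewhere
    frozenset({"funders", "staff"}): 10,   # the actual combination
}

players = ["funders", "staff"]

def shapley(player):
    # Average the player's marginal contribution over all join orders.
    orders = list(permutations(players))
    total = 0.0
    for order in orders:
        before = frozenset(order[:order.index(player)])
        total += value[before | {player}] - value[before]
    return total / len(orders)

for p in players:
    print(p, shapley(p))
# With these made-up numbers the staff get most of the credit (6.5 vs 3.5),
# which matches my instinct above.
```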
I was going to email but I assume others will want to know also so I’ll just ask here. What is the best way to donate an amount big enough that it’s stupid to pay a Stripe fee, e.g. $10k? Do you accept donations of appreciated assets like stock or cryptocurrency?
> But as a secondary point, I think today’s models can already use bash tools reasonably well.
Perhaps that’s true; I haven’t seen a lot of examples of them trying. I did see Buck’s anecdote, which was a good illustration of doing a simple task competently (finding the IP address of an unknown machine on the local network).
I don’t work in AI so maybe I don’t know what parts of R&D might be most difficult for current SOTA models. But based on the fact that large-scale LLMs are sort of a new field that hasn’t had that much labor applied to it yet, I would have guessed that a model which could basically just do mundane stuff and read research papers, could spend a shitload of money and FLOPS to run a lot of obviously informative experiments that nobody else has properly run, and polish a bunch of stuff that nobody else has properly polished.
I’m not confident but I am avoiding working on these tools because I think that “scaffolding overhang” in this field may well be most of the gap towards superintelligent autonomous agents.
If you imagine an o1-level entity with “perfect scaffolding”, i.e. it can get any info on a computer into its context whenever it wants, it can invoke any computer functionality that a human could invoke, it can store and retrieve knowledge for itself at will, and its training includes the use of those functionalities, it’s not completely clear to me that it wouldn’t already be able to do a slow self-improvement takeoff by itself, although the cost might currently be practically prohibitive.
I don’t think building that scaffolding is a trivial task at all, though.
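To gesture at what I mean by “perfect scaffolding”, here is a minimal sketch of such an agent loop; `call_model` is a hypothetical placeholder for a real model API, and this is an illustration of the shape of the thing, not a working design:

```python
import json
import subprocess
from pathlib import Path

MEMORY_FILE = Path("agent_memory.json")  # crude persistent store the agent can read/write

def call_model(context: str) -> dict:
    """Hypothetical placeholder for an LLM call.

    Expected to return an action dict, e.g.:
      {"type": "shell", "command": "ls"},
      {"type": "remember", "key": "plan", "value": "..."},
      {"type": "done"}
    """
    raise NotImplementedError("plug in a real model API here")

def run_agent(goal: str, max_steps: int = 20) -> None:
    memory = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    context = f"Goal: {goal}\nMemory: {json.dumps(memory)}\n"
    for _ in range(max_steps):
        action = call_model(context)
        if action["type"] == "done":
            break
        elif action["type"] == "shell":
            # The agent can invoke anything a human at a terminal could.
            result = subprocess.run(
                action["command"], shell=True, capture_output=True, text=True
            )
            context += f"\n$ {action['command']}\n{result.stdout}{result.stderr}"
        elif action["type"] == "remember":
            # Store knowledge for later sessions.
            memory[action["key"]] = action["value"]
            MEMORY_FILE.write_text(json.dumps(memory))
            context += f"\n[remembered {action['key']}]"
```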
I don’t have a bunch of citations, but I spend time in multiple rationalist social spaces, and it seems to me that I would in fact be excluded from many of them if I stuck to sex-based pronouns, because as stated above there are many trans people in the community, many of whom hold to the consensus progressive norms on this. The EA Forum policy is not unrepresentative of the typical sentiment.
So I don’t agree that the statements are misleading.
(I note that my typical habit is to use singular they for visibly NB/trans people, and I am not excluded for that. So it’s not precisely a kind of compelled speech.)
I have been playing this bot lately myself, and one thing it made me wonder is: how much better would it be at beating me if it were trained against a model of me in particular, rather than how it was actually trained? I feel like I have no idea.
2 data points: I have 15-20 years of experience at a variety of companies but no college and no FANG, currently semi-retired. Recruiters still spam me with many offers and my professional network wants to hire me at their small companies.
A friend of mine has ~2 years of experience as a web dev and some experience as a mechanical engineer + random personal projects, no college, and he worked hard to look for a software job and found absolutely nothing, with most companies never contacting him after an application.
One and a half years later it seems like AI tools are able to sort of help humans with very rote programming work (e.g. changing or writing code to accomplish a simple goal, implementing versions of things that are well-known to the AI like a textbook algorithm or a browser form to enter data, answering documentation-like questions about a system) but aren’t much help yet on the more skilled labor parts of software engineering.
It seems like Musk in 2018 dramatically underestimated the ability of OpenAI to compete with Google in the medium term.
Great stuff, I was quite surprised that current models can solve this now. I predicted they would not.
I appreciate your urge to put the edit at the top but I think it’s better to move it to the bottom, so it doesn’t spoil people trying to guess whether the models can solve it.