Trying to organize a festival probably isn’t risky. It doesn’t seem like it’d involve too much time or money.
I don’t think that’s true. I’ve co-organized one weekend-long retreat in a small hostel for ~50 people, and the cost was ~$5k. The co-organizers and I probably spent ~50h in total on organizing the event, as volunteers.
It’s unfortunate that you’re less likely to come, and I’m glad to get the feedback. I could reply with reasons why I think it was the right call (e.g. helpful for getting the event off the ground, helpful for pinpointing the sort of ideas+writing the event is celebrating, and I think it’s prosocial for me to be open about info like this generally), but I don’t think that engages with the fact that it left you personally less likely to come. I still think, overall, that if the event sounds like a good time to you (e.g. interesting conversations with people you’d like to talk to and/or exciting activities) and it’s worth the cost to you, then I hope you come :-)
To clarify my comment: I was merely describing my (non-endorsed[1]) emotional reaction to the festival, and my intention with the comment was not to wag my finger at you in the manner of “you didn’t invite me”.
I wonder whether other people have a similar emotional reaction.
I appreciate Lightcone being open with the information around free invitations, though! I think I’d have bought a ticket anyway if I’d had time around that weekend, and I’d probably have a blast if I attended.
Btw: What’s the chance of a 2nd LessOnline?
[1]: I think my reaction is super bound up in icky status-grabbing/status-desiring/inner-ring-infiltrating parts of my psyche, which I’m not happy with.
Oops, you’re correct about the typo and also about how this doesn’t restrict belief change to Brownian motion. Fixing the typo.
Putting the festival at the same time as EAG London is unfortunate.
Giving out “over 100 free tickets” induces (in me) a reaction of “if I’m not invited, I’m not going to buy a ticket”. This is perhaps because I hope/wish to slide into one of those 100 slots, even though that’s unrealistic. I believe other events solve this by just publishing a list of confirmed attendees and staying silent about which of them got free tickets.
Because[1] for a Bayesian reasoner, there is conservation of expected evidence.
Although I’ve seen it mentioned that, technically, the change in a Bayesian’s beliefs should follow a martingale, and Brownian motion is a martingale.
[1]: I’m not super technically strong on this particular part of the math. Intuitively it could be that a bounded reasoner which can only evaluate programs in some class $\mathcal{C}$ will detect any pattern in its beliefs that can be described by an algorithm in $\mathcal{C}$, and incorporate the predicted future belief from that pattern into its current beliefs. On the other hand, any pattern described by an algorithm outside $\mathcal{C}$ can’t be in the agent’s class of hypotheses, including hypotheses about its own beliefs, so such patterns persist.
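As a concrete illustration of conservation of expected evidence, here is a minimal sketch (mine, not from the thread; the coin biases and the prior are arbitrary placeholders) showing that the expected posterior equals the prior:

```python
# Sketch: conservation of expected evidence for a single coin flip.
# All numbers are arbitrary placeholders.

p = 0.3  # prior P(coin is biased towards heads)

P_HEADS_IF_BIASED = 0.7
P_HEADS_IF_FAIR = 0.5

# Marginal probability of observing heads.
p_heads = p * P_HEADS_IF_BIASED + (1 - p) * P_HEADS_IF_FAIR

# Posterior after each possible observation (Bayes' rule).
posterior_heads = p * P_HEADS_IF_BIASED / p_heads
posterior_tails = p * (1 - P_HEADS_IF_BIASED) / (1 - p_heads)

# Expected posterior, weighted by how likely each observation is.
expected_posterior = p_heads * posterior_heads + (1 - p_heads) * posterior_tails

assert abs(expected_posterior - p) < 1e-12  # the belief is a martingale
```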
Thank you a lot for this. I think this or @Thomas Kwa’s comment would make an excellent original-sequences-style post: it doesn’t need to be long, but just going through an example and talking about the assumptions would be really valuable for applied rationality.
After all, it’s about how much one should expect one’s beliefs to vary, which is pretty important.
Thank you a lot! Strong upvoted.
I was wondering a while ago whether Bayesianism says anything about how much my probabilities are “allowed” to oscillate around—I was noticing that my probability of doom was often moving by 5% in the span of 1-3 weeks, though I guess this was mainly due to logical uncertainty and not empirical uncertainty.
Since there are ten 5% steps between 50% and 0 or 1, the expected number of such updates before the question resolves is 10×10=100; if resolution is ~10 years away, that’s 10 updates a year, or a little less than one a month, right? So I’m currently updating “too much”.
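To check that arithmetic, here’s a quick simulation sketch (mine; it models belief change as a simple ±5% random walk with absorbing barriers at 0 and 1, which is only a crude stand-in for the martingale condition):

```python
import random

def updates_until_resolution(start: int = 10, top: int = 20) -> int:
    """Belief tracked in units of 5%: start at 10/20 = 50%, absorb at 0 or 20."""
    k, n = start, 0
    while 0 < k < top:
        k += random.choice((-1, 1))  # one +-5% update
        n += 1
    return n

trials = [updates_until_resolution() for _ in range(10_000)]
print(sum(trials) / len(trials))  # ~100 on average, matching the estimate above
```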
Thanks, that makes sense.
Someone strong-downvoted a post/question of mine with a downvote strength of 10, if I remember correctly.
I had initially planned to just keep silent about this, since they have every right to downvote it if they think the post is bad or harmful.
But since the downvote, I can’t shake off my curiosity about why that person disliked my post so strongly. I’m willing to pay $20 for two or three paragraphs of explanation from them on why they downvoted it.
The standard way of dealing with this:
Quantify how much worse it would be for the PRC to get AGI than for OpenAI or the US government to get it, quantify the existential risk from pausing vs. not pausing and from the PRC/OpenAI/the US government building AGI first, and then calculate whether pausing to do {alignment research, diplomacy, sabotage, espionage} has higher expected value than moving ahead.
(Is China getting AGI first half the value of the US getting it first, or 10%, or 90%?)
The discussion over pausing vs. racing to AGI has been lacking this kind of analysis so far. Maybe I should write one.
Gentlemen, calculemus!
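For concreteness, a toy version of that calculation, with every probability and value weight a made-up placeholder rather than a claim:

```python
# Toy expected-value comparison of "race" vs. "pause"; all numbers are
# placeholders to be argued over, not estimates I endorse.

value = {
    "us_agi": 1.0,   # US/OpenAI builds aligned AGI first (normalized to 1)
    "prc_agi": 0.5,  # PRC builds aligned AGI first (half as good? 10%? 90%?)
    "doom": 0.0,     # unaligned AGI / existential catastrophe
}

def expected_value(p_doom: float, p_us_first: float) -> float:
    """EV of a strategy, given its x-risk and who likely wins if we survive."""
    p_survive = 1.0 - p_doom
    return (p_doom * value["doom"]
            + p_survive * (p_us_first * value["us_agi"]
                           + (1 - p_us_first) * value["prc_agi"]))

ev_race = expected_value(p_doom=0.30, p_us_first=0.8)   # rush ahead
ev_pause = expected_value(p_doom=0.15, p_us_first=0.3)  # pause for alignment/diplomacy

print(f"race:  {ev_race:.3f}")   # ~0.63
print(f"pause: {ev_pause:.3f}")  # ~0.55: with *these* numbers, racing wins
```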
I realized I hadn’t given feedback on the actual results of the recommendation algorithm. Rating the recommendations I’ve gotten (from −10 to 10, 10 is best):
The obsessive autists who have spent 10,000 hours researching the topic and writing boring articles in support of the mainstream position are left ignored.
It seems like you’re setting up three different categories of thinkers: academics, public intellectuals, and “obsessive autists”.
Notice that the examples you give overlap across those categories: Hanson and Caplan are academics (professors!), while Natália Mendonça is not an academic but is approaching public-intellectual status by now(?). Similarly, Scott Alexander strikes me as belonging in the “public intellectual” bucket much more than in any other.
So your conclusion, as far as I read the article, should be “read obsessive autists” instead of “read obsessive autists that support the mainstream view”. This is my current best guess: “obsessive autists” are usually not under much pressure to say politically palatable things, very unlike professors.
My best guess is that people in these categories were ones who were high in some other trait, e.g. patience, which allowed them to collect datasets or run careful experiments over long periods, thus enabling others to make great discoveries.
I’m thinking, for example, of Tycho Brahe, who is best known for 15 years of careful astronomical observation and data collection, or of Gregor Mendel’s 7-year-long experiments on peas. The same goes for Dmitri Belyaev and fox domestication. Of course I don’t know their cognitive scores, but those don’t seem like the bottleneck in their work.
So the recipe to me looks like “find an unexplored data source that requires long-term observation to bear fruit, but would yield a lot of insight if studied closely, then investigate”.
I think the Diesel engine would’ve taken 10 or 20 more years to be invented: from the Wikipedia article, it sounds like it was fairly unintuitive to people at the time.
A core value of LessWrong is to be timeless and not news-driven.
I do really like the simplicity and predictability of the Hacker News algorithm. More karma means more visibility, older means less visibility.
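For reference, the commonly cited approximate form of that algorithm (the exact production constants aren’t public, so the gravity exponent here is illustrative):

```python
def hn_rank(points: int, age_hours: float, gravity: float = 1.8) -> float:
    """More karma -> more visibility; older -> less visibility."""
    return (points - 1) / (age_hours + 2) ** gravity

print(hn_rank(points=100, age_hours=2))   # fresh, popular: ranks high
print(hn_rank(points=100, age_hours=24))  # same karma a day later: ranks much lower
```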
Our current goal is to produce a recommendations feed that both makes people feel like they’re keeping up to date with what’s new (something many people care about) and also suggests great reads from across LessWrong’s entire archive.
I hope that we can avoid getting swallowed by Shoggoth for now by putting a lot of thought into our optimization targets.
(Emphasis mine.)
Here’s an idea[1] for a straightforward(?) recommendation algorithm: Quantilize over all past LessWrong posts by using inflation-adjusted karma as a metric of quality.
The advantage is that this is dogfooding on some pretty robust theory. I also don’t think this is super compute-intensive: one only has to compute the cumulative distribution function over posts once a day (associating each post with its CDF value), and then draw recommendations by inverse transform sampling from that CDF.
Recommending this way has the disadvantage of not being recency-favoring (which I personally like), and not personalized (which I also like).
By default, it also excludes posts below a certain karma threshold. That could be solved by exponentially tilting the distribution instead of cutting it off (with the tilting parameter to be determined (experimentally?)). Such a recommendation algorithm wouldn’t be as robust against very strong optimizers, but since we have some idea what high-karma LessWrong posts look like (& we’re not dealing with a superintelligent adversary… yet), that shouldn’t be a problem. (A sketch of both variants in code follows the footnote.)
[1]: If I were more virtuous, I’d write a pull request instead of a comment.
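Here’s a minimal sketch of what I mean (my code, not anything Lightcone has; the INFLATION constant, the quantile q, and the tilt beta are all placeholders to be tuned). The first function is the hard-cutoff quantilizer, the second the exponential-tilt variant:

```python
import math
import random

INFLATION = 1.05  # assumed karma inflation per year since posting

def adjusted_karma(karma: float, age_years: float) -> float:
    """Deflate karma so old and new posts are comparable."""
    return karma / INFLATION ** age_years

def quantilize(posts: list[tuple[str, float, float]], q: float = 0.1) -> str:
    """Sample uniformly from the top q-fraction of posts by adjusted karma."""
    ranked = sorted(posts, key=lambda p: adjusted_karma(p[1], p[2]), reverse=True)
    top = ranked[: max(1, int(len(ranked) * q))]
    return random.choice(top)[0]

def tilted_sample(posts: list[tuple[str, float, float]], beta: float = 0.05) -> str:
    """Exponentially tilt by adjusted karma instead of cutting off, so
    low-karma posts keep a small but nonzero chance of being recommended."""
    weights = [math.exp(beta * adjusted_karma(k, a)) for _, k, a in posts]
    return random.choices([title for title, _, _ in posts], weights=weights)[0]

# (title, karma, age in years); made-up examples
posts = [("Post A", 300, 8.0), ("Post B", 120, 1.0), ("Post C", 15, 0.5)]
print(quantilize(posts, q=0.34))
print(tilted_sample(posts))
```

In practice the daily job would precompute the CDF over adjusted karma and sample via inverse transform; the above does the equivalent thing directly for clarity.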
There are several sequences which are visible on the profiles of their authors, but haven’t yet been added to the library. Those are:
- Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind (Kaj Sotala)
- The Sense Of Physical Necessity: A Naturalism Demo (LoganStrohl)
- Scheming AIs: Will AIs fake alignment during training in order to get power? (Joe Carlsmith)
I think these are good enough to be moved into the library.
Back then I didn’t try to get the hostel to sign the metaphorical assurance contract with me; maybe that’d work. A good dominant assurance contract website might work as well.
I guess if you go camping together, then conferences are pretty scalable, and if I were to organize another event, I’d probably try to first message a few people to get a minimal number of attendees together. After all, the spectrum between an extended party and a festival/conference is fluid.