Good question :) Honestly, it’s so that we can get an early, low-commitment expression of interest: it gives us an idea of numbers for the venue search, and something to take to potential funders. Without a deadline, I don’t know how to get people to actually apply to things like this. Don’t tell anyone, but I expect that some people who apply after the deadline will be accepted to join the retreat.
There’s another question, which is “why are you planning this so far in advance?” And the answer there is “visas can take a really long time to process, especially when Africa is involved”.
The multiple buildings made it feel like a complex to me, but I’ve changed the wording to simply “hotel”.
Yes, I’m now questioning my memory, but the rack rate was posted inside the RGI room I was staying in. I forget the room number, but feel free to DM me if you’d like a description of which one it was.
The book, in case anyone is wondering, is The Art of Community by Charles H. Vogl, and is very good. I’m grateful to the CEA.
I actually think that catering of high enough quality that people don’t leave the premises for meals is a very efficient use of money. And there’s a good argument to be made that the most efficient use of money isn’t the most effective one.
But also, thanks :)
Thanks for being open about your response, I appreciate it and I expect many people share your reaction.
I’ve edited the section about the hotel room price/purchase, where people have pointed out that I may have been incorrect or misleading.
This definitely wasn’t meant to be a hit piece, or misleading “EA bad” rhetoric.

On the point of “What does a prospective AI x-safety researcher think when they get referred to this site and see this post above several alignment research posts?”—I think this is a large segment of my intended audience. I would like people to know what they’re getting themselves in for, so they can make an informed decision.
I think that a lot of the point of this post is to explore and share the dissonance between what “thinks” right and what “feels” right. The title of the piece was intended to make it clear that this is about an emotional, non-rational reaction. It’s styled more as a piece of journalism than as a scientific paper, because I think that’s the best way to communicate the emotional reaction that is the main focus of the piece.
I appreciate the encouragement, and I do still agree with my decision to attempt a 6-month exploration to see whether I can do meaningful alignment work.
I don’t really know what the point is either. I think I’m just trying to share how I feel.
Thanks @habryka—I’ve edited the post to make it clearer that it’s hearsay and that the purchase is not complete. If you think “hotel complex” is a misleading description for the RGI I’d happily consider an alternative term.
Thanks @lincolnquirk—It’s almost certainly a price that no one pays, and I’ve edited the post to make that clearer, but it did still shock me.
I’m happy for it to be cross-posted there, but I’m not sure how to do that myself. If anyone else wants to, feel free. (Edit: I’m confused by the downvote. Is this advising against cross-posting? Or suggesting that I should work out how to and then do it myself?)
Here’s an example of someone successfully guiding GPT-3 by prompting it with a walkthrough, using a similar token-aware approach:
https://twitter.com/npew/status/1525900849888866307
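(For anyone curious what “token-aware” means here: GPT-3 reads text as sub-word chunks rather than individual letters, which is why the prompt in that thread walks the model through the pieces explicitly. Here’s a minimal sketch, assuming the tiktoken tokenizer library, of how a word splits into the chunks the model actually sees:)

```python
# Minimal sketch (assuming the tiktoken library) of why prompts sometimes
# need to be token-aware: GPT-3 operates on sub-word chunks, not letters.
import tiktoken

# r50k_base is the encoding used by the original GPT-3 (davinci-class) models.
enc = tiktoken.get_encoding("r50k_base")

word = "anthropomorphise"
token_ids = enc.encode(word)
pieces = [enc.decode([t]) for t in token_ids]
print(pieces)  # the sub-word chunks the model sees, not individual letters
```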
This might be unwelcome nit-picking, but I find it kind of jarring to read “meta is Greek for above, mesa is Greek for below.” That’s not quite right: μετα is more like ‘after’ in “turn right after the bridge”, and μεσα is more like ‘within’ (μεσο is like ‘middle’, as in ‘mesoscale’). Above/below would be something like άνω/κάτω (as in anode/cathode).
I think the meta/mesa has nice symmetry, and the name is now well-known, but maybe this particular sentence could be made less wrong :p
Also the bibliography link #7 for “What is the opposite of meta?” seems broken for me.
Hi there,
I ran this event a while ago, and would like to claim it for Oxford Rationalish https://www.lesswrong.com/groups/wQA8BE5e8mETeWb8A, rather than the (inactive) university society.
Current schedule just dropped!
| Start Time | Title | Organiser |
| --- | --- | --- |
| 10:00 | Tea & Hello | Sam Brown |
| 10:30 | Speed Friending | Sam Brown & Patrick Wilson |
| 11:20 | 10-min break | |
| 11:30 | Double-Crux: collaborative disagreement | Sam Brown |
| 12:20 | 10-min break | |
| 12:30 | Alexander Technique and Awareness | Lulie Tanett |
| 13:30 | Lunch | Chris Ardarne |
| 14:30 | Acoustic Sing-along | Patrick Wilson |
| 15:00 | Heroes, Role models, and Imagination | David Leon |
| 15:50 | 10-min break | |
| 16:00 | Hamming Questions | Sam Brown |
| 16:20 | 10-min break | |
| 16:30 | Hamming Circles | Sam Brown |
| 17:50 | 10-min break | |
| 18:00 | Focusing | Damon Sasi |
| 19:00 | Dinner | Chris Ardarne |
| 20:30 | Metta Meditation | Mrinank Sharma |
| 21:30 | Tea & Chill | |
This group seems to be inactive. There’s another group, Oxford Rationalish, which is currently active (Jan 2022); we aim to have at-least-monthly pub meetups, Circling, and occasional Applied-Rationality workshops.
https://www.lesswrong.com/groups/wQA8BE5e8mETeWb8A
https://www.facebook.com/groups/1221768638031684/
This is a great question, and one that I think everyone should be asking themselves and each other. It would be very easy for these things to devolve into an aimless free-for-all, which wouldn’t be great.
I think you’re probably the best judge of whether you’d get value from coming. But, to give you a personal example, at the Global retreat a) I realised why I ran a meetup at all, b) my goals became much more ambitious, and c) I’ve doubled down on putting effort into making my group succeed. I’ve since started a regular applied-rationality dojo, which may or may not have happened without the inspiration of seeing others’ success. My group is growing, and the attendee balance is improving. Also, I’ve found it very useful to have the support of an international community of rat-ty organisers.