Hi, I am a Physicist, an Effective Altruist and AI Safety student/researcher.
Linda Linsefors
AI Safety Outreach Seminar & Social (online)
In standard form, a natural latent is always approximately a deterministic function of X. Specifically: […]
What does the arrow mean in this expression?
I think the qualitative difference is not as large as you think it is. But I also don’t think this is very crux-y for anything, so I will not try to figure out how to translate my reasoning into words, sorry.
I guess the modern equivalent that’s relevant to AI alignment would be Singular Learning Theory which proposes a novel theory to explain how deep learning generalizes.
I think Singular Learning Theory was developed independently of deep learning, and is not specifically about deep learning. It’s about any learning system, under some assumptions, which are more general than the assumptions for normal Learning Theory. This is why you can use SLT but not normal Learning Theory when analysing NNs. NNs break the assumptions for normal Learning Theory but not for SLT.
Ok, in that case I want to give you this post as inspiration.
Changing the world through slack & hobbies — LessWrong
That’s still pretty good. Most reading lists are not updated at all after publication.
Is it an option to keep your current job but spend your research hours on AI Safety instead of quarks? Is this something that would be appealing to you + acceptable to your employer?
Given the current AI safety funding situation, I would strongly recommend not giving up your current income.
I think that a lot of the pressure towards streetlight research comes from the funding situation. The grants are short, and to stay in the game you need visible results quickly.
I think MATS could be good if you can treat it as exploration, but not so good if you’re in a hurry to get a job or a grant directly afterwards. Since MATS is 3 months full-time, it might not fit into your schedule (without quitting your job). Maybe instead try SPAR. Or see here for more options.
Or you can skip the training program route, and just start reading on your own. There are lots and lots of AI safety reading lists. I recommend this one for you. @Lucius Bushnaq, who created and maintains it, also did quark physics before switching to AI Safety. But if you don’t like it, there are more options here under “Self studies”.
In general, the funding situation in AI safety is pretty shit right now, but other than that, there are so many resources to help people get started. It’s just a matter of choosing where to start.
Einstein did his pre-paradigmatic work largely alone. Better collaboration might’ve sped it up.
I think this is false. As I remember hearing the story, he was corresponding with several people via letters.
The aesthetics have even been considered carefully, although oddly this has not extended to dress (as far as I have seen).
I remember there being some dress instructions/suggestions for last year’s Bay solstice. I think we were told to dress in black, blue and gold.
Funding Case: AI Safety Camp 11
I’m not surprised by this observation. In my experience, rationalists also have a higher-than-base-rate level of all sorts of gender non-conformity, including non-binary and trans people. And the trends are even stronger in AI Safety.
I think the explanation is:
High tolerance for this type of non-conformity
A high rate of autism, which correlates with these things
- Relative to the rest of the population, people in this community prioritize other things (writing, thinking about existential risk, working on cool projects perhaps) over routine chores (getting a haircut)
I think that this is almost the correct explanation. We prioritise other things (writing, thinking about existential risk, working on cool projects perhaps) over caring about whether someone else got a haircut.
What it’s like to organise AISC
About once or twice per week at this time of year, someone emails me to ask:
Please let me break rule X
My response:
No, you’re not allowed to break rule X. But here’s a loophole that lets you do the thing you want without technically breaking the rule. Be warned that I think using the loophole is a bad idea, but if you still want to, we will not stop you.
Because closing the loophole would be too restrictive for other reasons, and I’m not going to withhold from people what their options are.
The fact that this puts the responsibility back on them is a bonus feature I really like. Our participants are adults, and are allowed to make their own mistakes. But also, sometimes it’s not a mistake, because there is no set of rules that fits every occasion, and I don’t have all the context of their personal situation.
Quote from the AI voiced podcast version of this post.
Such a lab, separated by more than 1 Australian Dollar from Earth, might provide sufficient protection for very dangerous experiments.
London rationalish meetup @ Arkhipov
Same data but in chronological order
10th-11th
* 20 total applications
* 4 (20%) Stop/Pause AI
* 8 (40%) Mech-Interp and Agent Foundations
12th-13th
* 18 total applications
* 2 (11%) Stop/Pause AI
* 7 (39%) Mech-Interp and Agent Foundations
15th-16th
* 45 total applications
* 4 (9%) Stop/Pause AI
* 20 (44%) Mech-Interp and Agent Foundations
Stop/Pause AI stays at 2-4 per period, while the others go from 7-8 to 20.
One may point out that 2 to 4 is just a doubling, suggesting noisy data, and that going from 7-8 to 20 is also not much more than a doubling, so it might not mean much. This could be the case. But we should expect higher noise for lower numbers, i.e. a doubling from 2 is less surprising than a (more than) doubling from 7-8.
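As a rough illustration of that point (my own quick check, not part of the original analysis): if applications arrive roughly like a Poisson process, we can ask how surprising each jump would be if the underlying rate had stayed at the earlier level. The counts are the ones listed above, 7.5 stands in for “7-8”, and scipy is assumed to be available.

```python
# Minimal sketch: Poisson tail probability of seeing the later count,
# assuming the true rate stayed at the earlier level.
from scipy.stats import poisson

def tail_prob(later_count, earlier_rate):
    """P(count >= later_count) under Poisson(earlier_rate)."""
    return poisson.sf(later_count - 1, mu=earlier_rate)

print(tail_prob(4, 2))     # Stop/Pause AI: 2 -> 4, roughly 0.14 (easily noise)
print(tail_prob(20, 7.5))  # Mech-Interp & Agent Foundations: 7-8 -> 20, roughly 1e-4 (much more surprising)
```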
12th-13th
* 18 total applications
* 2 (11%) Stop/Pause AI
* 7 (39%) Mech-Interp and Agent Foundations
15th-16th
* 45 total applications
* 4 (9%) Stop/Pause AI
* 20 (44%) Mech-Interp and Agent Foundations
All applications
* 370 total
* 33 (12%) Stop/Pause AI
* 123 (46%) Mech-Interp and Agent Foundations
Looking at the above data, it is directionally correct for your hypothesis, but it doesn’t look statistically significant to me. The numbers are pretty small, so it could be a fluke.
So I decided to add some more data:
10th-11th
* 20 total applications
* 4 (20%) Stop/Pause AI
* 8 (40%) Mech-Interp and Agent Foundations
Looking at all of it, it looks like Stop/Pause AI are coming in at a stable rate, while Mech-Interp and Agent Foundations are going up a lot after the 14th.
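For what it’s worth, one quick way to check the “could be a fluke” worry (my own framing, not from the original comment) is a Fisher exact test on a before/after-the-14th split of the counts above, with scipy assumed available:

```python
# Minimal sketch: 2x2 table of (Stop/Pause AI, Mech-Interp & Agent Foundations)
# counts before vs after the 14th, using the numbers listed above.
from scipy.stats import fisher_exact

table = [[4 + 2, 8 + 7],   # 10th-13th: 6 Stop/Pause, 15 Mech-Interp/Agent Foundations
         [4,     20]]      # 15th-16th: 4 Stop/Pause, 20 Mech-Interp/Agent Foundations

odds_ratio, p_value = fisher_exact(table)
print(odds_ratio, p_value)  # p comes out well above 0.05, i.e. not significant on these counts alone
```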
AI Safety interest is growing in Africa.
AISC has 25 (out of 370) applicants from Africa, with 9 from Kenya and 8 from Nigeria.
Numbers for all countries (people with multiple locations not included)
AISC applicants per country — Google Sheets
The rest looks more or less in line with what I would expect.
Sounds plausible.
> This would predict that the ratio of technical:less-technical applications would increase in the final few days.
If you want to operationalise this in terms of project first choice, I can check.
Side note:
If you don’t say what time the application deadline is, lots of people will assume it’s anywhere-on-Earth, i.e. noon the next day in GMT. When I was new to organising, I did not think of this and kind of forgot about time zones. I noticed that I got a steady stream of “late” applications that suddenly ended at 1pm (I was in GMT+1), and didn’t know why.
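To make the conversion concrete (a minimal sketch; the deadline date and zone names are my own illustrative choices, not from the post): anywhere-on-Earth is UTC-12, so the end of the deadline day there falls around noon the next day in GMT, and around 1pm in GMT+1.

```python
# Minimal sketch: when an "anywhere-on-Earth" (UTC-12) deadline actually passes,
# shown in GMT/UTC and in a GMT+1 organiser's local time.
from datetime import datetime, timezone, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+

AOE = timezone(timedelta(hours=-12))                  # anywhere-on-Earth
deadline = datetime(2025, 1, 14, 23, 59, tzinfo=AOE)  # hypothetical deadline date

print(deadline.astimezone(timezone.utc))              # 2025-01-15 11:59 UTC, ~noon the next day
print(deadline.astimezone(ZoneInfo("Europe/Paris")))  # 2025-01-15 12:59 +01:00, i.e. ~1pm in GMT+1 (January)
```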
This looks like a typo?
Did you just mean “CS”?