Hi, I am a Physicist, an Effective Altruist and AI Safety student/researcher.
Linda Linsefors
12th-13th
* 18 total applications
* 2 (11%) Stop/Pause AI
* 7 (39%) Mech-Interp and Agent Foundations
15th-16th
* 45 total applications
* 4 (9%) Stop/Pause AI
* 20 (44%) Mech-Interp and Agent Foundations
All applications
* 370 total
* 33 (12%) Stop/Pause AI
* 123 (46%) Mech-Interp and Agent Foundations
Looking at the above data, it is directionally correct for your hypothesis, but it doesn’t look statistically significant to me. The numbers are pretty small, so it could be a fluke.
So I decided to add some more data:
10th-11th
* 20 total applications
* 4 (20%) Stop/Pause AI
* 8 (40%) Mech-Interp and Agent Foundations
Looking at all of it, it looks like Stop/Pause AI are coming in at a stable rate, while Mech-Interp and Agent Foundations are going up a lot after the 14th.
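For what it’s worth, here is a minimal sketch (in Python, using scipy; the tooling, the pooling of the 10th-13th as “early” vs the 15th-16th as “late”, and the choice of a Fisher exact test are all mine, not anything established above) of the kind of quick significance check I have in mind:

```python
from scipy.stats import fisher_exact

# 2x2 table: rows = early (10th-13th) vs late (15th-16th),
# columns = Mech-Interp/Agent Foundations vs everything else.
early = [8 + 7, 38 - (8 + 7)]  # 15 MI/AF out of 38 applications
late = [20, 45 - 20]           # 20 MI/AF out of 45 applications

odds_ratio, p_value = fisher_exact([early, late])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2f}")  # p is nowhere near 0.05
```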
AI Safety interest is growing in Africa.
AISC got 25 (out of 370) applicants from Africa, with 9 from Kenya and 8 from Nigeria.
Numbers for all countries (people with multiple locations not included)
AISC applicants per country (Google Sheets)

The rest looks more or less in line with what I would expect.
Sounds plausible.
> This would predict that the ratio of technical:less-technical applications would increase in the final few days.
If you want to operationalise this in terms of project first choice, I can check.
Side note:
If you don’t say what time the application deadline is, lots of people will assume it’s anywhere-on-Earth, i.e. noon the next day in GMT.

When I was new to organising I did not think of this, and kind of forgot about time zones. I noticed that I got a steady stream of “late” applications that suddenly ended at 1pm (I was in GMT+1), and didn’t know why.
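As a minimal sketch of the conversion (the specific date and year are just an illustration, not from the post): “anywhere on Earth” is UTC-12, so a deadline that ends at midnight AoE ends at noon the next day in GMT/UTC.

```python
from datetime import datetime, timedelta, timezone

# "Anywhere on Earth" (AoE) is UTC-12, so the end of a Nov 17 deadline
# in AoE lands at noon the next day in GMT/UTC.
aoe = timezone(timedelta(hours=-12))
end_of_deadline = datetime(2024, 11, 18, 0, 0, tzinfo=aoe)  # midnight ending Nov 17 AoE
print(end_of_deadline.astimezone(timezone.utc))  # 2024-11-18 12:00:00+00:00
```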
Every time I have an application form for some event, the pattern is always the same: a steady trickle of applications, and then a doubling on the last day.
And for some reason it still surprises me how accurate this model is. The trickle can be a bit uneven, but the doubling on the last day is usually close to spot on.
This means that by the time I have a good estimate of the average number of applications per day, I can predict what the final number will be. This is very useful for knowing whether I need to advertise more or not.
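A minimal sketch of this heuristic in Python (the function and its day accounting are my own framing of the rule, assuming a perfectly steady trickle until the last day):

```python
def predict_final(current_count: float, days_elapsed: float, total_days: float) -> float:
    """Steady trickle until the last day, then the running total doubles."""
    rate = current_count / days_elapsed            # average applications per day so far
    at_start_of_last_day = rate * (total_days - 1) # extrapolate the trickle
    return 2 * at_start_of_last_day                # the last-day doubling
```

With 172 applications at the start of the last day, this gives the 344 prediction mentioned below.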
For the upcoming AISC, the trickle was late-skewed, which meant that an early estimate had me at around 200 applicants, but the final number of on-time applications was 356. I think this is because we were a bit slow at advertising early on, but Remmelt did a good job sending out reminders towards the end.
Application deadline was Nov 17.
At midnight GMT before Nov 17 we had 172 applications.
At noon GMT Nov 18 (end of Nov 17 anywhere-on-Earth) we had 356 applications.
The doubling rule predicted 344, which is only 3% off.
Yes, I count the last 36 hours as “the last day”. This is not cheating, since that’s what I’ve always done (approximately[1]) since I started observing this pattern. It’s the natural thing to do when you live at or close to GMT, or at least if your brain works like mine.

[1] I’ve always used my local midnight as the divider. Sometimes that has been Central European Time, and sometimes there is daylight saving time. But it’s all pretty close.
If people are ashamed to vote for Trump, why would they let their neighbours know?
> Linda Linsefors of the Center for Applied Rationality
Hi, thanks for the mention.
But I’d like to point out that I never worked for CFAR in any capacity. I have attended two CFAR workshops. I think that calling me “of the Center for Applied Rationality” is very misleading, and I’d prefer it if you remove that part, or possibly re-phrase it.
You can find their preferred contact info in each document in the Team section.
Yes there are, sort of...
You can apply to as many projects as you want, but you can only join one team.
The reason for this is: when we’ve let people join more than one team in the past, they usually end up not having time for both and dropping out of one of the projects.
What this actually means:
When you join a team you’re making a promise to spend 10 or more hours per week on that project. When we say you’re only allowed to join one team, what we’re saying is that you’re only allowed to make this promise to one project.
However, you are allowed to help out other teams with their projects, even if you’re not officially on the team.
@Samuel Nellessen
Thanks for answering Gunnar’s question.

But also, I’m a bit nervous that posting their email here directly in the comments is too public, i.e. easy for spam-bots to find.
If a research lead wants to be contactable, their contact info is in their project document, under the “Team” section. Most (or all, I’m not sure) research leads have some contact info there.
AI Safety Camp 10
The way I understand it, the homunculus is part of the self. So if you put the wanting in the homunculus, it’s also inside the self. I don’t know about you, but my self-concept has more than wanting. To be fair, the homunculus concept is also a bit richer than wanting (I think?) but less encompassing than the full self (I think?).
Based on Steve’s response to one of my comments, I’m now less sure.
Reading this post is so strange. I’ve already read the draft, so it’s not even new to me, but still very strange.
I do not recognise this homunculus concept you describe.
Other people reading this, do you experience yourself like that? Do you resonate with the intuitive homunculus concept as described in the post?
I myself have a unified self (mostly). But that’s more or less where the similarity ends.
For example, when I read:

> in my mind, I think of goals as somehow “inside” the homunculus. In some respects, my body feels like “a thing that the homunculus operates”, like that little alien-in-the-head picture at the top of the post,

my intuitive reaction is astonishment. Like, no-one really thinks of themselves like that, right? It’s obviously just a metaphor, right?
But that was just my first reaction. I know enough about human mind variety to absolutely believe that Steve has this experience, even though it’s very strange to me.
> Similarly, as Johnstone points out above, for most of history, people didn’t know that the brain thinks thoughts! But they were forming homunculus concepts just like us.

Why do you assume they were forming homunculus concepts? Since it’s not veridical, they might have had a very different self model.
I’m from the same culture as you, and I claim I don’t have a homunculus concept, or at least not one that matches what you describe in this post.
I don’t think what Steve is calling “the homunculus” is the same as the self.
Actually, he says so:

> The homunculus, as I use the term, is specifically the vitalistic-force-carrying part of a broader notion of “self”
It’s part of the self model but not all of it.
> (Neuroscientists obviously don’t use the term “homunculus”, but when they talk about “top-down versus bottom-up”, I think they’re usually equating “top-down” with “caused by the homunculus” and “bottom-up” with “not caused by the homunculus”.)
I agree that the homunculus-theory is wrong and bad, but I still think there is something to top-down vs bottom-up.
It’s related to what you write later:

> Another part of the answer is that positive-valence S(X) unlocks a far more powerful kind of brainstorming / planning, where attention-control is part of the strategy space. I’ll get into that more in Post 8.

I think conscious control (aka top-down) is related to conscious thoughts (in the global workspace theory sense), which is related to using working memory to unlock more serial compute.
> That said, if those sorts of concepts are natural in our world, then it’s kinda weird that human minds weren’t already evolved to leverage them.
A counter-possibility that comes to mind:

There might be concepts that are natural in our world, but which are only useful for a mind with much more working memory, or other compute resources, than the human mind.

If weather simulations use concepts that are confusing and unintuitive for most humans, this would be evidence for something like this. Weather is something that we encounter a lot, and it is important for humans, especially historically. If we haven’t developed some natural weather concept, it’s not for lack of exposure or lack of selection pressure, but for some other reason. That other reason could be that we’re not smart enough to use the concept.
Same data but in chronological order
10th-11th
* 20 total applications
* 4 (20%) Stop/Pause AI
* 8 (40%) Mech-Interp and Agent Foundations
12th-13th
* 18 total applications
* 2 (11%) Stop/Pause AI
* 7 (39%) Mech-Interp and Agent Foundations
15th-16th
* 45 total applications
* 4 (9%) Stop/Pause AI
* 20 (44%) Mech-Interp and Agent Foundations
Stop/Pause AI stays at 2-4 per two-day window, while the others go from 7-8 to 20.
One may point out that 2 to 4 is a doubling, suggesting noisy data, and that going from 7-8 to 20 is also not much more than a doubling, so it might not mean much. This could be the case. But we should expect higher noise for lower numbers, i.e. a doubling of 2 is less surprising than a (more than) doubling of 7-8.
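To make that concrete, here is a minimal sketch assuming the per-window counts are roughly Poisson (that distributional assumption is mine; the data above doesn’t establish it):

```python
import math

# Poisson counts fluctuate by ~sqrt(n), so relative noise shrinks as n grows:
# a doubling from a base of ~2 is much easier to get by chance than from ~7.5.
for n in (2, 7.5):
    sd = math.sqrt(n)
    print(f"mean {n}: sd = {sd:.1f} ({sd / n:.0%} relative)")
```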