New York has apparently distributed 35% of the vaccine that it has. Maybe they are focusing on other bottlenecks? Though my naive guess would be that the main problems are that the staff at US agencies are more numerous, less competent, and more heavily regulated, as part of the aging process of any bureaucracy, compounded by the declining prestige of government jobs.
RyanCarey
RyanCarey’s Shortform
One alternative would be to try to raise funds (e.g. perhaps from the EA LTF fund) to pay reviewers to perform reviews.
I don’t (and perhaps shouldn’t) have a guaranteed trigger—probably I will learn a lot more about what the trigger should be over the next couple years. But my current picture would be that the following are mostly true:
The AIS field is publishing 3-10x as many papers per year as the causal inference field is now.
We have ~3 highly aligned tenured professors at top-10 schools, and ~3 mostly-aligned tenured professors with ~10k citations, who want to be editors of the journal
The number of great papers that can’t get into other top AI journals is >20 per year. I figure it’s currently like ~2.
The chance that some other group creates a similar (worse) journal for safety in the subsequent 3 years is >20%
This idea has been discussed before. Though it’s an important one, so I don’t think it’s a bad thing for us to bring it up again. My perspective now, as previously, is that this would be fairly bad at the moment, but might be good in a couple of years’ time.
My background understanding is that the purpose of a conference or journal in this case (and in general) is primarily to certify the quality of some work (and to a lesser extent, the field of inquiry). This in turn helps with growing the AIS field, and the careers of AIS researchers.
This is only effective if the conference or journal is sufficiently prestigious. Presently, publishing AI safety papers in NeurIPS, AAAI, JMLR, or JAIR serves to certify the validity of the work and boosts the field of AI safety, whereas publishing in (for example) Futures or AGI doesn’t. If you create a new publication venue, its prestige would by default be comparable to, or less than, that of Futures or AGI, and so it wouldn’t really serve the role of a journal.
Currently, the flow of AIS papers into the likes of NeurIPS and AAAI (and probably soon JMLR and JAIR) is rapidly improving. New keywords have been created at several conferences, along the lines of “AI safety and trustworthiness” (I forget the exact wording), so that you can nowadays expect, on average, to receive reviewers who average out to neutral, or even vaguely sympathetic, toward AIS research. Ten or so papers were published in such venues in the last year, and all these authors will become reviewers under that keyword when the conference comes around next year. Yes, things like “Logical Inductors” or “AI safety via debate” are very hard to publish. There’s some pressure to write research that’s more “normie”. All of that sucks, but it’s an acceptable cost for being in a high-prestige field. And overall, things are getting easier, fairly quickly.
If you create too low-prestige a journal, you can generate blowback. For example, there was some criticism on Twitter of Pearl’s “Journal of Causal Inference”, even though his field is somewhat more advanced than ours.
In 1.5-3 years time, I think the risk-benefit calculus will probably change. The growth of AIS work (which has been fast) may outpace the virtuous cycle that’s currently happening with AI conferences and journals, such that a lot of great papers are getting rejected. There could be enough tenure-track professors at top schools to make the journal decently high-status (moreso than Futures and AGI). We might even be nearing the point where some unilateral actor will go and make a worse journal if we don’t make one. I’d say when a couple of those things are true, that’s when we should pull the trigger and make this kind of conference/journal.
Yeah, I was thinking something similar. It seems the bottom line is we’ll have to stay at home and receive deliveries for most of the next 4-8 months, while vaccines and infections bring the world toward herd immunity. So as individuals, we should make sure we’re suitably located and supplied for that scenario.
Good year for this portfolio. Any new tips? :P
Follow-on post from mingyuan: Location Discussion Takeaways.
There are three arguments: (1) polls underestimating Dems in Southern states, (2) benchmarking against the 2018 Senate results, and (3) some low-quality tweets.
It’s weird to hold a lot of stock in (2), given noise from candidate selection and other variables.
If you place a lot of weight on (1), the actually sane bet would be Biden in AZ. It’s rated the 2nd and 4th most likely state to go Dem by Cohn and Wasserman respectively.
Biden for AZ: 77% likely (Economist), priced at 54% on Election Betting Odds.
The Texas bet (TX) seems EV neutral to me, and clearly far worse than the nationwide electoral college (EC) bet.
Biden for EC: 95% likely (The Economist model), priced at 62%
Biden for TX: 26% likely (The Economist), priced at 29%
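To make the comparison concrete, here is a rough back-of-envelope sketch (my own, using only the numbers quoted above): a YES share in a binary market bought at price p pays out $1 if the event occurs, so under a model probability q the expected profit per share is roughly q − p, ignoring fees and the time value of money.

```python
def ev_per_share(model_prob: float, market_price: float) -> float:
    """Expected profit per $1-payout YES share: probability minus price."""
    return model_prob - market_price

# Numbers quoted above (Economist model probabilities vs. market prices):
ec = ev_per_share(0.95, 0.62)  # Biden for EC: about +$0.33 per share
tx = ev_per_share(0.26, 0.29)  # Biden for TX: about -$0.03, roughly EV neutral
az = ev_per_share(0.77, 0.54)  # Biden for AZ: about +$0.23 per share

print(round(ec, 2), round(tx, 2), round(az, 2))
```

This is why the TX bet looks EV neutral while the EC and AZ bets look clearly positive, if you trust the model probabilities.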
The two Twitter feeds are full of a lot of shitposting, and don’t update me much.
It bears noting that ads can do good—they can spread important messages. They can encourage people to make purchases that they actually benefit from. And they can help especially with launching new projects that people aren’t yet aware of.
So ideally the advertiser would pay a price for inflicting these negatives, so that we would get the benefits with fewer of the costs.
The same is basically true for any niche interest—it will only be catered to where there’s an adequate population to justify it. In my case, it’s particular jazz music.
Probably a lot of people have different niche interests like that, even if they can’t agree on one.
He is even more effusive in his essay “Cities and Ambition” (which incidentally is quite relevant for figuring out where rationalists should want to live):
Great cities attract ambitious people. You can sense it when you walk around one. In a hundred subtle ways, the city sends you a message: you could do more; you should try harder. The surprising thing is how different these messages can be. New York tells you, above all: you should make more money. There are other messages too, of course. You should be hipper. You should be better looking. But the clearest message is that you should be richer. What I like about Boston (or rather Cambridge) is that the message there is: you should be smarter. You really should get around to reading all those books you’ve been meaning to.
As of this writing, Cambridge seems to be the intellectual capital of the world. I realize that seems a preposterous claim. What makes it true is that it’s more preposterous to claim about anywhere else. American universities currently seem to be the best, judging from the flow of ambitious students. And what US city has a stronger claim? New York? A fair number of smart people, but diluted by a much larger number of neanderthals in suits. The Bay Area has a lot of smart people too, but again, diluted; there are two great universities, but they’re far apart. Harvard and MIT are practically adjacent by West Coast standards, and they’re surrounded by about 20 other colleges and universities. [1] Cambridge as a result feels like a town whose main industry is ideas, while New York’s is finance and Silicon Valley’s is startups.
When I moved to New York, I was very excited at first. It’s an exciting place. So it took me quite a while to realize I just wasn’t like the people there. I kept searching for the Cambridge of New York. It turned out it was way, way uptown: an hour uptown by air.
Given that the policies are never going to be reverted, maybe better questions would be: which of the policies were the ones that mattered, are any of them politically feasible, and if none of them are feasible in the US, then where?
I guess it’s because high-conviction ideologies outperform low-conviction ones, including nationalistic and political ideologies, and religions. Dennett’s Gold Army/Silver Army analogy explains how conviction can build loyalty and strength, and a similar thing is probably true for movement-builders. Also, conviction might make adherents feel better, and therefore simply be more attractive.
It would be nice to draw out this distinction in more detail. One guess:
Uninfluenceability seems similar to requiring a zero individual treatment effect of D on R.
Riggability (from the paper) would then correspond to a zero average treatment effect of D on R.
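As a toy numerical illustration of the proposed distinction (my own example, not from the paper): a zero average treatment effect permits nonzero individual effects that cancel out, whereas a zero individual treatment effect forbids D from affecting R for any unit at all.

```python
import numpy as np

# Potential outcomes R(D=0) and R(D=1) for four hypothetical units.
r0 = np.array([1.0, 2.0, 3.0, 4.0])
r1 = np.array([2.0, 1.0, 4.0, 3.0])  # every unit is affected, but effects cancel

ite = r1 - r0      # individual treatment effects: [1, -1, 1, -1]
ate = ite.mean()   # average treatment effect: 0.0

# A "zero ATE" condition is satisfied here even though D influences R for
# every single unit; a "zero ITE" condition would require ite == 0 everywhere.
print(ate, ite)
```

So if the mapping holds, riggability-freedom (zero ATE) is the strictly weaker condition, and uninfluenceability (zero ITE) rules out cancellation cases like this one.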
Thanks!
New paper: The Incentives that Shape Behaviour
This is a cool idea. However, are you actually using the subscript in two confusingly different ways? In I_2020, it seems you’re talking about you, indexed to the year 2020, whereas in {Abdul Bey}_2000, it seems you’re citing a book. It would be pretty bad for people to see a bunch of the first kind of case, and then expect citations, but only get them half the time.
A response from @politicalmath, based on Smallpox: The Death of a Disease by DA Henderson: