I’ve now sent emails to all of the prize-winners.
The Choice Transition
A brief history of the automated corporation
Winners of the Essay competition on the Automation of Wisdom and Philosophy
AI safety tax dynamics
Safety tax functions
Actually, on (1): I think that these consequentialist reasons are properly just covered by the later sections. That section is about reasons it’s maybe bad to make the One Ring, ~regardless of the later consequences. So it makes sense to emphasise the non-consequentialist reasons.
I think there could still be consequentialist analogues of those reasons, but they would be more esoteric: maybe something decision-theoretic, or appealing to how we might want to be treated by future AI systems that gain ascendancy.
Yeah. As well as another consequentialist argument, which is just that it will be bad for other people to be dominated. Somehow the arguments feel less natively consequentialist, so it seems easier to hold them in these other frames and then translate them into a consequentialist ontology when that’s relevant; but it would also be very reasonable to mention them in the footnote.
My first reaction was that I do mention the downsides. But I realise that was a bit buried in the text, and I can see how it could be misleading about my overall view. I’ve now edited the second paragraph of the post to be more explicit about this. I appreciate the pushback.
Ha, thanks!
(It was part of the reason. Normally I’d have made the effort to import, but here I felt a bit like maybe it was just slightly funny to post the one-sided thing, which nudged towards linking rather than posting; and also I thought I’d take the opportunity to see experimentally whether it seemed to lead to less engagement. But those reasons were not overwhelming, and now that you’ve put the full text here I don’t find myself very tempted to remove it. :) )
The judging process should be complete in the next few days. I expect we’ll write to winners at the end of next week, although it’s possible that will be delayed. A public announcement of the winners is likely to be a few more weeks away.
AI, centralization, and the One Ring
I don’t see why (1) says you should be very early. Isn’t the decrease in measure for each individual observer precisely offset by their increasing multitudes?
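(To spell out the toy arithmetic behind my question, with made-up numbers: suppose each observer’s measure halves at every branching, so an observer at time $t$ has measure $2^{-t}$, while the number of observers has doubled $t$ times, to $2^t$. Then the total measure across observers at time $t$ is

$$2^t \cdot 2^{-t} = 1,$$

independent of $t$; so the per-observer decline doesn’t by itself favour being early.)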
This kind of checks out to me. At least, I agree that it’s evidence against treating quantum computers as primitive that humans, despite living in a quantum world, find classical computers more natural.
I guess I feel more like I’m in a position of ignorance, though, and wouldn’t be shocked to find some argument that quantum computation has, in some other a priori sense, a deep naturalness which other niche physics theories lack.
You say that quantum computers are more complex to specify, but is this a function of using a classical computer in the speed prior? I’m wondering if it could somehow be quantum all the way down.
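(For concreteness: on the commonly quoted simplified form of Schmidhuber’s speed prior, the weight of an output $x$ is roughly

$$S(x) \;\propto\; \sum_{p \,:\, U(p) = x} 2^{-|p|} \cdot \frac{1}{t_U(p)},$$

where both the program length $|p|$ and the runtime $t_U(p)$ are defined relative to a reference machine $U$. If $U$ is classical, programs specifying quantum computers plausibly pay twice, in description length and in simulation slowdown; if $U$ were itself a quantum machine, I’d expect the comparison could look quite different.)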
It’s not obvious that open source leads to faster progress. Having high-quality open source products reduces the incentives for private investment. I’m not sure in which regimes the net effect is accelerationist, but I’d guess it will be decelerationist during an intense AI race (where the investments needed to push the frontier out are enormous and significantly profit-motivated).
I like the framework.
Conceptual nit: why do you include inhibitions as a type of incentive? It seems to me more natural to group them with internal motivations than external incentives. (I understand that they sit in the same position in the argument as external incentives, but I guess I’m worried that lumping them together may somehow obscure things.)
The last era of human mistakes
I actually agree with quite a bit of this. (I nearly included a line about pursuing excellence in terms of time allocation, but it seemed possibly redundant with some of the other stuff on not making the perfect the enemy of the good, and I couldn’t quickly see how to fit it cleanly into the flow of the post, so I left it out and moved on …)
I think it’s important to draw the distinction between perfection and excellence. Broadly speaking, I think people often put too much emphasis on perfection, and often not enough on excellence.
Maybe I shouldn’t have led with the “Anything worth doing is worth doing right” quote. I do see that it’s closer to perfectionist than excellence-seeking, and I don’t literally agree with it. Though one thing I like about the quote is its contrapositive: “anything not worth doing right isn’t worth doing”. Again something I don’t literally agree with, but I think it captures an important vibe.
I do think people in academia can fail to find the corners they should be cutting. But I also think that they write a lot of papers that (to a first approximation) just don’t matter. I think that academia would be a healthier place if people invested more in asking “what’s the important thing here?” and focusing on that, and not trying to write a paper at all until they thought they could write one with the potential to be excellent.
It’s been a long time since I read those books, but if I’m remembering roughly right: Asimov seems to describe a world where choice is in a finely balanced equilibrium with other forces. (I’m inclined to think implausibly so: if that much control could be exerted at great distances in time, one would think that more effective control could be exerted over things at somewhat less distance.)