Autonomous Systems @ UK AI Safety Institute (AISI)
DPhil AI Safety @ Oxford (Hertford College, CS dept, AIMS CDT)
Former senior data scientist and software engineer + SERI MATS
I’m particularly interested in sustainable collaboration and the long-term future of value. I’d love to contribute to a safer and more prosperous future with AI! Always interested in discussions about axiology, x-risks, s-risks.
I enjoy meeting new perspectives and growing my understanding of the world and the people in it. I also love to read—let me know your suggestions! In no particular order, here are some I’ve enjoyed recently:
Ord—The Precipice
Pearl—The Book of Why
Bostrom—Superintelligence
McCall Smith—The No. 1 Ladies’ Detective Agency (and series)
Melville—Moby-Dick
Abelson & Sussman—Structure and Interpretation of Computer Programs
Stross—Accelerando
Simsion—The Rosie Project (and trilogy)
Cooperative gaming is a relatively recent but fruitful interest for me. Here are some of my favourites:
Hanabi (can’t recommend enough; try it out!)
Pandemic (ironic at time of writing...)
Dungeons and Dragons (I DM a bit and it keeps me on my creative toes)
Overcooked (my partner and I enjoy its foodie themes and frantic real-time coordination)
People who’ve got to know me only recently are sometimes surprised to learn that I’m a pretty handy trumpeter and hornist.
Ah, yep yep. (Though humans do in fact learn (with losses) from other copies, and within a given firm/lab/community the transfer is probably quite high.)
Hmm, I think we have basically the same model, with maybe somewhat different parameters (about which we’re both somewhat uncertain). But I think that readers of the OP without some of that common model might be misled.
From a lot of convos and writing, I infer that many people conflate a lot of aspects of intelligence into one thing. With that view, ‘intelligence explosion’ is just a dynamic where a single variable, ‘the intelligence’ (or ‘the algorithmic quality’), gets real big real fast. And then of course, because intelligence is the thing that gets you new technology, you can get all the new technology.
About this, you said,
revealing that you correctly distinguish different factors of intelligence like ‘sample efficiency’ and ‘chemistry knowledge’ (I think I already knew this but it’s good to have local confirmation), and that you don’t think a software-only IE yields all of them.
Regarding the second sentence, it could be a misleading use of terms to call that ‘generalisation’[1], but I agree that ‘sample efficiency’ is among the relevant aspects of intelligence (and is a candidate for one that could be built up mostly generalisably and automatedly in silico), that a relevant complement is ‘chemistry (frontier research) experience’, and that a lot of each taken together may effectively yield chemistry research taste in addition (which can produce new chemistry knowledge if applied with suitable experimental apparatus).
I’m emphasising[2] (in my exploration post, in this thread, and in the sister comment about frontier taste depreciation) that there’s practically a wide gulf between ‘hoover taste up from web data’ and ‘robotics or humans-with-headsets’, in two ways. The first tops out somewhere (probably at sub-frontier) due to the depreciation of frontier research taste. The second cluster doesn’t top out anywhere, but is slower and has more barriers to getting started. Is a year to exceed humanity’s peak taste bold in most domains? Not sure! If a lot of it can be done in silico, maybe it’s doable. That might include cyber and software (and maybe particular narrow areas of chem/bio where simulation is especially good).
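(To make the gulf vivid, here’s a purely illustrative toy sketch; the cap, depreciation rate, and accrual rates are numbers I’ve made up, not anything derived. The point is just the qualitative shape: the web-data route converges to a sub-frontier ceiling, while the slow embodied route keeps compounding and eventually crosses it.)

```python
from math import exp

# Toy comparison of the two taste-acquisition routes (all numbers assumed).
# Route A: hoovering taste from web data -- fast, but the frontier moves on,
#          so accrued taste depreciates toward a sub-frontier ceiling.
# Route B: robotics / humans-with-headsets -- slow to start, but uncapped.

FRONTIER = 1.0  # normalised taste level of humanity's current frontier


def web_data_taste(t: float, cap: float = 0.8, rate: float = 2.0) -> float:
    """Approaches a sub-frontier cap (cap < FRONTIER) exponentially fast."""
    return cap * (1.0 - exp(-rate * t))


def embodied_taste(t: float, rate: float = 0.1) -> float:
    """Accrues slowly via fresh experiments, with no ceiling."""
    return rate * t


for t in [0.5, 1.0, 5.0, 15.0]:
    print(f"t={t:>4}: web={web_data_taste(t):.2f}, embodied={embodied_taste(t):.2f}")
```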
If you know for sure that those other bottlenecks proceed super fast, then for practical purposes you don’t necessarily need to clarify which intelligence factors you’re talking about; but if you’re not sure, I think it’s worth being super clear about it where possible.
I might instead prefer terminology like ‘quickly implies … (on assumptions...)’
Incidentally, the other thing I’m emphasising (but I think you’d agree?) is that on this view, R&D is always substantially driven by experimental throughput, with ‘sample efficiency (of the combined workforce) at accruing research taste’ as the main other rate-determining factor (because the steady state of research taste depends on it, and progress is exploration quality * experimental throughput). Throwing more labour at it can make your serial experimentation a bit faster, and can parallelise experimentation (with some parallelism discount), presumably with strongly diminishing returns. Throwing smarter labour at it (as in, better sample efficiency, and maybe faster thinking, also with diminishing returns) can increase the rate by extracting more insight per experiment and choosing better experiments.
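(A rough toy version of that rate model, to pin down what I mean; the functional forms below, like the square-root parallelism discount and the saturating taste curve, are my illustrative assumptions, not a canonical model.)

```python
# Toy version of the rate model above (all functional forms assumed):
#   progress rate = research taste (exploration quality) * experimental throughput
# with steady-state taste set by workforce sample efficiency, and labour
# parallelising experiments at a discount.


def experimental_throughput(labour: float, serial_rate: float = 1.0,
                            parallelism_discount: float = 0.5) -> float:
    """Experiments per unit time: serial rate times a discounted
    parallelism term (the sqrt-like exponent is an assumption)."""
    return serial_rate * labour ** parallelism_discount


def steady_state_taste(sample_efficiency: float, max_taste: float = 10.0) -> float:
    """Taste saturates; better sample efficiency (more insight per
    experiment) raises the steady state. Saturating form is assumed."""
    return max_taste * sample_efficiency / (1.0 + sample_efficiency)


def progress_rate(labour: float, sample_efficiency: float) -> float:
    """Progress = exploration quality (steady-state taste) * throughput."""
    return steady_state_taste(sample_efficiency) * experimental_throughput(labour)


print(progress_rate(labour=100.0, sample_efficiency=1.0))  # baseline: 50.0
print(progress_rate(labour=400.0, sample_efficiency=1.0))  # 4x labour -> only 2x: 100.0
print(progress_rate(labour=100.0, sample_efficiency=3.0))  # smarter workforce: 75.0
```

On these (assumed) shapes you can see the claim directly: quadrupling labour only doubles throughput, while raising sample efficiency lifts the taste steady state and hence the whole rate.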