Views my own, not my employer’s.
nc
telic/partial-niche evodevo
This really clicked for me. I don’t blame you for making up the term because, although I can see the theory and point to example papers on the topic, I can’t think of a unifying term that isn’t horrendously broad (e.g. molecular ecology).
I am surprised that you find theoretical physics research less funding-constrained than AI alignment [is this because the paths to funding in physics are well-worn, rather than better resourced?].
This whole post was a little discouraging. I hope that the research community can find a way forward.
I do think it’s conceptually nicer to donate to PauseAI now rather than relying on the investment appreciating enough to offset the delay in donating. Not that investing first is necessarily the wrong thing to do, but it injects a lot more uncertainty into the model, and that uncertainty is difficult to quantify.
The fight for human flourishing doesn’t end at the initiation of takeoff [echo many points from Seth Herd here]. More generally, it’s very possible to win the fight and lose the war, and a broader base of people who are invested in AI issues will improve the situation.
(I also don’t think this is an accurate simplification of the climate movement or its successes/failures. But that’s tangential to the point I’d like to make.)
I think PauseAI would be more effective if it could mobilise people who aren’t currently associated with AI safety, but from what I can see it largely draws from the same base as EA. It is important to involve as wide a section of society as possible in the x-risk conversation, and activism could help achieve this.
The most likely scenario by far is that a mirrored bacterium would be outcompeted by other bacteria and killed by achiral defenses due to [examples of ecological factors]
I think this is the crux of the different feelings around this paper. There are a lot of unknowns here. The paper does a good job of acknowledging this and (imo) it justifies a precautionary approach, but I think the breadth of uncertainty is difficult to communicate in e.g. policy briefs or newspaper articles.
It’s a good connection to draw; I wonder whether increased awareness of AI is sparking increased awareness of safety concepts in related fields. It’s a particularly good sign for awareness of, and action on, the safety concepts present in the overlap between AI and biotechnology.
I think you’re right that, for mirror life, there’s very little benefit compared to the risks, which is not seen as true of AI; that’s on top of the general truth that biotech is harder to monetise.
Can you explain more about why you think [AGI requires] a feature shared across mammals, rather than one specific to humans or some other particular species?
It’s very field-dependent. In ecology & evolution, advisor-student fit is very influential and most programmes admit you directly to a specific professor. The weighting seems different for CS programmes, many of which make you choose an advisor after admission (my knowledge is weaker here).
In the UK it’s more funding-dependent: grant-funded PhDs are almost entirely dependent on the advisor’s opinion, whereas DTPs/CDTs have different selection criteria and are (imo) more grades-focused.
From discussing AI politics with the general public [i.e. not experts], it seems that public perception of AI progress is splitting along two parallel lines:
A) Current AI progress is sudden and warrants a response (either acceleration or regulation)
B) Current AI progress is a flash-in-the-pan or a nothingburger.
(This is independent from responding to hypothetical AI-in-concept.)
These perspectives are largely factual rather than ideological. In conversation, the tension between the two incompatible views is really obvious, and it makes it hard to hold a meaningful conversation without coming across as overbearing or accusatory.
Where does this divide come from? Is it an image hangover from the public’s early interactions with ChatGPT? How can we bridge it when speaking to the average person?
It is worth noting that UKRI is in the process of changing its language to Doctoral Landscape Awards (replacing DTPs) and Doctoral Focal Awards (replacing CDTs). The BBSRC and NERC announcements have already been made, but I can’t find what EPSRC is doing.
I agree that evolutionary arguments are frequently confused and oversimplified, but your argument proves too much.
[the difference between] AI and genetic code is that genetic code has way less ability to error-correct than basically all AI code, and it’s in a weird spot of reliability where random mutations are frequent enough to drive evolution, but not so frequent as to cause organisms to outright collapse within seconds or minutes.
This “weird spot of reliability” is itself an evolved trait, and even accounting for mutation-rate variation between species, the variation within populations is heavily constrained (see Lewontin’s paradox of variation). Even setting aside purely genetic (code-level) factors, the amount of plasticity in behaviour (search, perhaps, in AI terms) is also an evolvable trait (see canalisation). I suspect there are already terms for this within the AI field, but it’s not obvious to me how best to link the two ideas together. I’m more curious about the evolutionary arguments around value drift, but I don’t see an a priori reason that these ideas don’t apply.
It would be good if we could understand the conditions under which greater plasticity/evolvability is selected for, and whether we expect its effects to occur in a timeframe relevant to near-term alignment/safety.
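To make that concrete, here is a toy sketch of the kind of model I have in mind (entirely my own illustration with made-up parameters, not taken from any paper): each individual carries a trait value and its own mutation rate, and we ask whether higher mutation rates are favoured when the fitness optimum moves versus when it stays put.

```python
import math
import random

def simulate(optimum_drift, generations=500, pop_size=200, seed=0):
    """Toy Wright-Fisher-style model: each individual has a trait and its own
    mutation rate (an 'evolvability' modifier). Returns the mean mutation rate
    at the end, so static and moving fitness optima can be compared."""
    rng = random.Random(seed)
    pop = [(0.0, 0.05) for _ in range(pop_size)]  # (trait, mutation_rate); arbitrary start
    optimum = 0.0
    for _ in range(generations):
        optimum += optimum_drift  # environment changes (or not)
        # Gaussian stabilising selection around the current optimum
        weights = [math.exp(-(trait - optimum) ** 2) for trait, _ in pop]
        parents = rng.choices(pop, weights=weights, k=pop_size)
        pop = []
        for trait, mu in parents:
            new_trait = trait + rng.gauss(0.0, mu)          # trait mutates at rate mu
            new_mu = max(1e-4, mu + rng.gauss(0.0, 0.005))  # mutation rate itself evolves
            pop.append((new_trait, new_mu))
    return sum(mu for _, mu in pop) / pop_size

print("static optimum: mean mutation rate =", round(simulate(0.0), 3))
print("moving optimum: mean mutation rate =", round(simulate(0.05), 3))
```

In this toy setting you would expect the moving-optimum run to end with a higher mean mutation rate than the static one; the open question above is whether anything analogous happens on timescales that matter for near-term systems.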
Another reason is that effective AI architectures can’t go through simulated evolution, since that would use up too much compute for training to work (we forget that evolution had, at a lower bound, 10e46 to 10e48 FLOPs to get to humans).
It’s not obvious to me that this is a sharp lower bound, particularly when AI systems are already receiving the benefits of prior human computation in the form of culture. Human evolution had to do the hard part of reifying the world into semantic objects, whereas AI gets a major head-start. If language is the key idea (as some have argued), then I think there’s a decent chance that the lower bound is smaller than this.
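As a throwaway illustration of how sensitive that bound is to its inputs, here is a back-of-the-envelope sketch; every number in it is an assumption I have invented for the example, not a sourced estimate.

```python
# Illustrative Fermi estimate of "compute spent by evolution". All values are
# made-up assumptions for the sake of the example, not sourced figures.
years_of_neural_evolution = 1e9    # assumption: ~1 Gyr since early nervous systems
seconds_per_year = 3.15e7
mean_concurrent_organisms = 1e20   # assumption: average population with nervous systems
flops_per_organism_second = 1e6    # assumption: tiny average brain

total_flops = (years_of_neural_evolution * seconds_per_year
               * mean_concurrent_organisms * flops_per_organism_second)
print(f"total ~ {total_flops:.0e} FLOPs")  # ~3e42 with these assumptions

# The point: each factor is uncertain by orders of magnitude, and the
# culture/language head-start argued for above effectively lets AI skip
# some share of this budget entirely.
```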
There’s a connection to the idea of irony poisoning here, and I do not think it is good for the person in question to pretend to hold extremist views. A parallel issue is that it’s terrible optics and creates a difficult tension with this website’s newfound interest in doing communications/policy/outreach work.
Currently I’m not convinced that the memetic analogy has done more to clarify than to occlude cultural evolution/opinion dynamics. That’s not to say that work in genetics is useless, but the terminology has taken precedence over what the concepts actually mean, and I read a lot of conversations that feel like people just trading the fact that they read The Selfish Gene 40 years ago.
There’s certainly scope for an applied “memetics”, but it’s really crying out for a good predictive (even if simplistic) model.
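By “simplistic” I mean something even as crude as an SIR-style adoption model, where a meme spreads by contact and adopters eventually lose interest. The sketch below is my own toy illustration with invented parameters, not a pointer to any existing memetics work.

```python
def meme_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, days=120, dt=1.0):
    """Toy SIR-style meme model: S = never seen it, I = actively sharing,
    R = bored of it. All parameters are invented for illustration."""
    s, i, r = s0, i0, 0.0
    trajectory = []
    for _ in range(int(days / dt)):
        new_adopters = beta * s * i * dt   # contact-driven adoption
        new_bored = gamma * i * dt         # loss of interest
        s, i, r = s - new_adopters, i + new_adopters - new_bored, r + new_bored
        trajectory.append((s, i, r))
    return trajectory

# Peak "virality": when the sharing fraction is largest.
shares = [i for _, i, _ in meme_sir()]
peak_day = shares.index(max(shares))
print(f"peak sharing on day {peak_day}: {max(shares):.1%} of the population")
```

Even a model this crude makes falsifiable predictions (peak timing, saturation level), which is more than most memetic just-so stories offer.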
I’ve noticed they perform much better on graduate-level ecology/evolution questions (in a qualitative sense: the answers are fuller as well as technically accurate). I think translating that into a “usefulness” metric is always going to be difficult, though.
I would have found it helpful if your report had included a ROSES-type diagram or other flowchart showing the steps in your paper collation. This would bring it closer in line with other scoping reviews and would have made your methodology easier to follow.
Linguistic Drift, Neuralese, and Steganography
In this section you use these terms as though there’s a body of research underneath them. I’m very interested in understanding this behaviour, but I wasn’t aware it was being measured. Is anyone currently working on modelling or measuring linguistic drift, with manuscripts you could link?
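For concreteness, here is one naive way drift could be quantified, purely my own sketch rather than anything from the literature I’m asking about: compare the token distributions of agent messages at two checkpoints using Jensen-Shannon divergence.

```python
import math
from collections import Counter

def token_distribution(text):
    """Unigram distribution over whitespace tokens (crude, for illustration)."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence between two distributions (dicts: token -> prob)."""
    vocab = set(p) | set(q)
    m = {t: 0.5 * (p.get(t, 0.0) + q.get(t, 0.0)) for t in vocab}
    def kl(a):
        return sum(a[t] * math.log2(a[t] / m[t]) for t in vocab if a.get(t, 0.0) > 0.0)
    return 0.5 * kl(p) + 0.5 * kl(q)

# Hypothetical example messages, not real model outputs.
early = "please pass the blue key to agent two"
late = "blu kk ag2 psps"
print(js_divergence(token_distribution(early), token_distribution(late)))
```

A higher divergence between early and late checkpoints would be one (very rough) signal that the messages are drifting away from their original distribution.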
My impression is that’s a little simplistic, but I also don’t have the best knowledge of the market outside WGS/WES and related tools. That particular market is a bloodbath. Maybe there’s better scope in proteomics/metabolomics/stuff I know nothing about.
My impression is that much of this style of innovation is happening inside research institutes and then diffusing outward. There are plenty of people doing “boring” infrastructure work at the Sanger Institute, EMBL-EBI, etc. And you all get it for free! I can however see that on-demand services for biotech are a little different.
I wonder if this is why major governments pushed mandatory open access around 2022-2023. In the UK, all publicly funded research is now required to be open access. I think the coverage is different in the US.
How big an issue is this in practice? For AI in particular, considering that so much contemporary research is published on arXiv, it must be relatively accessible?