Seth Herd

Karma: 4,799

Message me here or at seth dot herd at gmail dot com.

I was a researcher in cognitive psychology and cognitive neuroscience for about two decades; I studied complex human thought. Now I’m applying what I’ve learned to the study of AI alignment.

Research overview:

Alignment is the study of how to give AIs goals or values aligned with ours, so we’re not in competition with our own creations. Recent breakthroughs in AI like ChatGPT make it possible we’ll have smarter-than-human AIs soon. So we’d better get ready. If their goals don’t align well enough with ours, they’ll probably outsmart us and get their way, possibly much to our chagrin. See this excellent intro video for more.

There are good and deep reasons to think that aligning AI will be very hard. But I think we have promising solutions that bypass most of those difficulties, and could be relatively easy to use for the types of AGI we’re most likely to develop first.

That doesn’t mean I think building AGI is safe. Humans often screw up complex projects, particularly on the first try, and we won’t get many tries. If it were up to me I’d Shut It All Down, but I don’t see how we could actually accomplish that for all of humanity. So I focus on finding alignment solutions.

In brief, I think we can probably build and align language model agents (or language model cognitive architectures) even when they’re more autonomous and competent than humans. We’d use a stacked suite of alignment methods that can mostly or entirely avoid using RL for alignment, and achieve corrigibility (human-in-the-loop error correction) by giving the agent a central goal of following instructions. This scenario leaves multiple humans in charge of ASIs, creating some dangerous dynamics, but those problems might be navigated, too.

Bio

I did computational cognitive neuroscience research from getting my PhD in 2006 until the end of 2022. I’ve worked on computational theories of vision, executive function, episodic memory, and decision-making, using neural network models of brain function. I’ve focused on the emergent interactions that are needed to explain complex thought. Here’s a list of my publications.

I was increasingly concerned with AGI applications of the research, and reluctant to publish my full theories lest they be used to accelerate AI progress. I’m incredibly excited to now be working directly on alignment, currently as a research fellow at the Astera Institute.

More on approach

The field of AGI alignment is “pre-paradigmatic.” So I spend a lot of my time thinking about what problems need to be solved, and how we should go about solving them. Solving the wrong problems seems like a waste of time we can’t afford.

When LLMs suddenly started looking intelligent and useful, I noted that applying cognitive neuroscience ideas to them might well enable them to reach AGI and soon ASI levels. Current LLMs are like humans with no episodic memory for their experiences, and very little executive function for planning and goal-directed self-control. Adding those cognitive systems to LLMs can make them into cognitive architectures with all of humans’ cognitive capacities—a “real” artificial general intelligence that will soon be able to outsmart humans.
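To make that concrete, here’s a bare-bones sketch of what I mean (purely illustrative; the class and function names are mine, and the llm() stub stands in for any real model call): an LLM wrapped with an episodic memory store and a simple executive loop that plans, acts, and records what happened.

```python
# Hypothetical, minimal sketch of a "language model cognitive architecture":
# an LLM call wrapped with an episodic memory store and an executive loop.
# The llm() stub stands in for any chat-completion call; all names are illustrative.

from dataclasses import dataclass, field


def llm(prompt: str) -> str:
    """Stand-in for a language model call; a real system would call an actual model API."""
    return f"[model response to: {prompt[:60]}...]"


@dataclass
class EpisodicMemory:
    """Stores past episodes and retrieves the ones most relevant to a query."""
    episodes: list[str] = field(default_factory=list)

    def store(self, episode: str) -> None:
        self.episodes.append(episode)

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        # Toy relevance score: count of shared words. A real system would use embeddings.
        words = set(query.lower().split())
        scored = sorted(self.episodes,
                        key=lambda e: len(words & set(e.lower().split())),
                        reverse=True)
        return scored[:k]


def executive_loop(goal: str, memory: EpisodicMemory, max_steps: int = 3) -> str:
    """Plan, act, and record: a bare-bones executive-function wrapper around the LLM."""
    plan = llm(f"Goal: {goal}\nWrite a short step-by-step plan.")
    result = ""
    for step in range(max_steps):
        context = "\n".join(memory.retrieve(goal))
        result = llm(f"Goal: {goal}\nPlan: {plan}\nRelevant past episodes:\n{context}\n"
                     f"Carry out step {step + 1} and report the result.")
        memory.store(f"Step {step + 1} toward '{goal}': {result}")
    return result


if __name__ == "__main__":
    memory = EpisodicMemory()
    print(executive_loop("summarize recent alignment research", memory))
```

A real system would replace the keyword-overlap retrieval with embeddings and the stubbed model call with an actual LLM, but the point is just that the missing cognitive systems are wrappers around the model, not changes to it.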

My work since then has convinced me that we could probably also align such an AGI so that it stays aligned even if it grows much smarter than we are. Instead of trying to give it a definition of ethics it can’t misunderstand or re-interpret (value alignment mis-specification), we’ll do the obvious thing: design it to follow instructions. It’s counter-intuitive to imagine an intelligent entity that wants nothing more than to follow instructions, but there’s no logical reason this can’t be done. An instruction-following proto-AGI can be instructed to act as a helpful collaborator in keeping itself aligned as it grows smarter.
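As a toy illustration only (not a real alignment scheme, and the names are again made up), the core loop of such an agent could look like this: its only top-level goal is to carry out the principal’s instructions, and the human can veto or correct any proposed action before it’s taken.

```python
# Toy illustration of instruction-following with human-in-the-loop correction.
# Not an actual alignment scheme; the llm() and ask_principal() stubs are hypothetical.

def llm(prompt: str) -> str:
    """Stand-in for a language model call."""
    return f"[proposed action for: {prompt[:60]}...]"


def ask_principal(question: str) -> str:
    """Stand-in for asking the human principal; here we just read from stdin."""
    return input(f"{question}\n> ")


def follow_instruction(instruction: str) -> None:
    proposed = llm(f"Instruction: {instruction}\nPropose the next action.")
    # Corrigibility as human-in-the-loop error correction: the human can approve,
    # veto, or amend any proposed action before it is taken.
    verdict = ask_principal(f"Proposed action: {proposed}\nApprove, or give a correction?")
    if verdict.strip().lower() in {"yes", "approve", "ok"}:
        print(f"Executing: {proposed}")
    else:
        print(f"Revising per correction: {verdict}")


if __name__ == "__main__":
    follow_instruction("Draft a summary of this week's alignment papers.")
```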

I increasingly suspect we should be actively working to build such intelligences. It seems like our best hope of survival, since I don’t see how we can convince the whole world to pause AGI efforts, and other routes to AGI seem much harder to align since they won’t “think” in English. Thus far, I haven’t been able to attract enough careful critique of my ideas to know whether this is wishful thinking, so I haven’t embarked on actually helping develop language model cognitive architectures.

Even though these approaches are pretty straightforward, they’d have to be implemented carefully. Humans often get things wrong on their first try at a complex project. So my p(doom), my estimate that we don’t survive long-term as a species, is in the 50% range: too complex to call. That’s despite having a pretty good mix of relevant knowledge and having spent a lot of time working through various scenarios. So I think anyone with a very high or very low estimate is overestimating their certainty.

Current Attitudes Toward AI Provide Little Data Relevant to Attitudes Toward AGI

Seth Herd · 12 Nov 2024 18:23 UTC
15 points · 2 comments · 4 min read · LW link

Intent alignment as a stepping-stone to value alignment

Seth Herd · 5 Nov 2024 20:43 UTC
32 points · 4 comments · 3 min read · LW link

“Real AGI”

Seth Herd · 13 Sep 2024 14:13 UTC
18 points · 20 comments · 3 min read · LW link

Conflating value alignment and intent alignment is causing confusion

Seth Herd · 5 Sep 2024 16:39 UTC
46 points · 18 comments · 5 min read · LW link

If we solve alignment, do we die anyway?

Seth Herd · 23 Aug 2024 13:13 UTC
70 points · 102 comments · 4 min read · LW link

Humanity isn’t remotely longtermist, so arguments for AGI x-risk should focus on the near term

Seth Herd · 12 Aug 2024 18:10 UTC
46 points · 10 comments · 1 min read · LW link

Fear of centralized power vs. fear of misaligned AGI: Vitalik Buterin on 80,000 Hours

Seth Herd · 5 Aug 2024 15:38 UTC
65 points · 22 comments · 5 min read · LW link

[Question] What’s a better term now that “AGI” is too vague?

Seth Herd · 28 May 2024 18:02 UTC
15 points · 9 comments · 2 min read · LW link

Anthropic announces interpretability advances. How much does this advance alignment?

Seth Herd · 21 May 2024 22:30 UTC
49 points · 4 comments · 3 min read · LW link
(www.anthropic.com)

Instruction-following AGI is easier and more likely than value aligned AGI

Seth Herd · 15 May 2024 19:38 UTC
70 points · 25 comments · 12 min read · LW link

Goals selected from learned knowledge: an alternative to RL alignment

Seth Herd · 15 Jan 2024 21:52 UTC
41 points · 17 comments · 7 min read · LW link

After Alignment — Dialogue between RogerDearnaley and Seth Herd

2 Dec 2023 6:03 UTC
15 points · 2 comments · 25 min read · LW link

Corrigibility or DWIM is an attractive primary goal for AGI

Seth Herd · 25 Nov 2023 19:37 UTC
16 points · 4 comments · 1 min read · LW link

Sapience, understanding, and “AGI”

Seth Herd · 24 Nov 2023 15:13 UTC
15 points · 3 comments · 6 min read · LW link

Altman returns as OpenAI CEO with new board

Seth Herd · 22 Nov 2023 16:04 UTC
6 points · 3 comments · 1 min read · LW link

OpenAI Staff (including Sutskever) Threaten to Quit Unless Board Resigns

Seth Herd · 20 Nov 2023 14:20 UTC
52 points · 28 comments · 1 min read · LW link
(www.wired.com)

We have promising alignment plans with low taxes

Seth Herd · 10 Nov 2023 18:51 UTC
33 points · 9 comments · 5 min read · LW link

Seth Herd’s Shortform

Seth Herd · 10 Nov 2023 6:52 UTC
6 points · 40 comments · 1 min read · LW link

Shane Legg interview on alignment

Seth Herd · 28 Oct 2023 19:28 UTC
66 points · 20 comments · 2 min read · LW link
(www.youtube.com)

The (partial) fallacy of dumb superintelligence

Seth Herd · 18 Oct 2023 21:25 UTC
36 points · 5 comments · 4 min read · LW link