Middle Child Phenomenon
Since I’ve seen no one talk about this, I’m coining the phrase ‘Middle Child Phenomenon’.
A law student who entered university four (4) years ago is faced with a curriculum that became completely outdated two (2) years in. Let's take a cohort of 1,000 law students from 2020 and work through the numbers.
100 students drop out by 2022 for a myriad of reasons: sickness, family, poverty, academic difficulty, etc.
80% of the remaining 900 (720 students) are the nobodies who cruise by and haven't really become lifers yet.
What happens to the remaining 180 students?
50% of those (90 students) are the all-rounders: committed enough to do well academically, with time for extracurriculars and an apprenticeship at a Big 5 law firm.
The other 50% (90 students) are split into two (2) groups: (i) the CV-Chasers, who win competitions and can recite the constitutional precedents of the last decade, and (ii) the Golden Children, who do everything, and do it damn well.
The Golden Children, around forty-five (45) students, talk here and there about artificial intelligence, maybe host a few workshops. They point out a giant, GIANT, GIANT problem with alignment, and it goes something like this:
Richard: “Hey, why doesn’t our curriculum have any artificial intelligence electives? Why aren’t any professors including homework problems or real-life examples for us to deal with?”
Sally: “As they say: ‘...born too late to sell textbooks, born too early to fix the justice system, but born just in time to read the gajillion discovery files Saxmon just gave me. Let’s-s-s-s go-o-o—’”
Jake: “Hang on for a sec, this is an actual problem. Either we beg our supervisors to let us work with Dr Bellows on that first AI case he’s cooking up, or we have to do our own reading.”
Richard: “And if we don’t do something now, because these professors, who’ve never entered a courtroom in their lives, certainly won’t, we’re going to graduate into a completely different world.”
Sally: “Then get replaced by the young hotshots who are blessed with a curriculum that actually gives a damn...I’m getting it now.”
Richard: “It’s a shitfest in the shitfactory on planet shitstorm indeed.”
The point is, forty-five (45) students is 4.5% of every yearly cohort. Even fewer than that will be any good at helping the war effort on AI alignment, and that’s assuming they get hired at a firm with the raw ammunition necessary to bring motion applications that materially change legislation and get parliamentarians to do something.
My conclusion:
Make bridging courses ASAP. Medico-legal courses already bring in doctors and medical practitioners to familiarise law students with the field in bite-sized pieces; AI deserves the same treatment.
Programmers and alignment researchers need to get on board with forming expert committees that include the younger generation.
Create a platform like Brilliant.org that’s designed to be simple and inviting for law students and students in related fields.
Run workshops and expos that introduce law students to artificial intelligence in an exciting, breathtaking way. The kind that gets a fire going, puts a sparkle in their eyes and makes them say: “Yes, this is what I want to specialise in.”
Start shifting the narrative away from “LLMs are destroying education!!!” towards “Let’s integrate basic programming skills and worked problems (without corroding critical thinking) into the average law student’s life.”
If you don’t, you’ll get the ‘Middle Child Phenomenon’: a generation of lawyers who, for a period of three (3) to five (5) years, either have no interest in AI or, worse, have the wrong ideas on how to go about it.
The long-term consequences of ignoring this group could be catastrophic. Europe’s AI Act is a clear example: terribly written from a legal standpoint and even worse when it comes to alignment efforts.
It’s time to stop ignoring the middle child in the family, shovel some food onto their plate and see what strength they can muster. I’ll be writing a post soon on what forms that strength can take, elaborating on some initiatives to fix the problem. For now, I wanted to alert the community.
Having read this post, I am still not sure what “the Middle Child Phenomenon” actually is, nor why it’s called that.
The name suggests something rather general. But most of the post seems like maybe the definition is something like “the fact that there isn’t a vigorous effort to get law students informed about artificial intelligence”.
Except that there’s also all the stuff about the distribution of talent and interests among law students, and another thing I don’t understand is what that actually has to do with it. If (as I’m maybe 75% confident) the main point of the post is that it would be valuable to have law students learn something about AI because public policy tends to be strongly influenced by lawyers, then it seems like this point would be equally strong regardless of how your cohort of 1000 lawyers is distributed between dropouts, nobodies, all-rounders, CV-chasers, and “golden children”. (I am deeply unconvinced by this classification, by the way, but I am not a lawyer myself and maybe it’s more accurate than it sounds.)
Alignment researchers are the youngest child, and programmers/OpenAI computer scientists are the eldest child. Law students/lawyers are the middle child, pretty simple.
It doesn’t matter whether you use 10,000 students or 100; the percentage remains embarrassingly small. I’ve simply used the categorisation to quickly illustrate to non-lawyers what the general environment currently looks like.
“golden children” is a parody of the Golden Circle, a running joke that you need to be perfect, God’s gift to earth sort of perfect, to get into a Big 5 law firm in the UK.
Yes, I know what the middle-child phenomenon is in the more literal context. I just don’t have any idea why you’re using the term here. I don’t see any similarities between the oldest / middle / youngest child relationships in a family and whatever relationships there might be between programmers / lawyers / alignment researchers.
(I think maybe all you actually mean is “these people are more important than we’re treating them as”. Might be true, but that isn’t a phenomenon, it’s just a one-off judgement that a particular group of people are being neglected.)
I still don’t understand why the distribution of talent/success/whatever among law students is relevant. If your point is that very few of them are going to be in a position to make a difference to AI policy, then surely that actually argues against your main claim that law students should be getting more attention from people who care about AI.