Thanks! I’ve actually seen some more recent posts that got pretty popular outlining this same argument, so I guess I’m… happy… that it’s gaining some traction? However happy one can be to see the same prophecy of doom repeated and validated by other people who are just as unlikely to change the current trajectory of the world as me.
Possibly perfectionism? I experience this form of creative paralysis a lot—as soon as I get deep enough into the weeds of one creative form I start seeing the endless ramifications of the tiniest decision and basically can't move a step without trying to achieve endlessly deep optimisation over the whole. Meanwhile people who can just not give a fuck and let the creative juices flow get shit done.
I think that’s a bit too extreme. Are all machines bad? No, obviously better to have mechanised agriculture than be all peasants. But he is grasping something here which we are now dealing with more directly. It’s the classic Moloch trap of “if you have enough power to optimise hard enough then all slack is destroyed and eventually life itself”. If you thought that was the inevitable endpoint of all technological development (and we haven’t proven it isn’t yet), you may end up thinking being peasants is better too.
I think some believe it’s downright impossible and others that we’ll just never create it because we have no use for something so smart it overrides our orders and wishes. That at most we’ll make a sort of magical genie still bound by us expressing our wishes.
I feel like this is a bit incorrect. There are imaginable things that are smarter than humans at some tasks, as smart as average humans at others, thus overall superhuman, yet controllable and therefore possible to integrate into an economy without immediately exploding into a utopian (or dystopian) singularity. The question is whether we are liable to build such things before we build the exploding-singularity kind, or if the latter is in some sense easier to build and thus stumble upon first. Most AI optimists think these limited and controllable intelligences are the default natural outcome of our current trajectory and thus expect mere boosts in productivity.
I don’t know about the Bible itself, but there’s a long and storied tradition of self-mortification and denial of corporeity in general in medieval Christian doctrine and mysticism. If we want to be cute we could call that fandom, but after a couple thousand years of it, it ends up being as important as the canon text itself.
I think the fundamental problem is that yes, there are people with that innate tendency, but that is not in the slightest bit helped by creating huge incentives for a whole industry to put its massive resources into finding ways to make that tendency become as bad as possible. Imagine if we had entire companies that somehow profited from depressed people committing suicide, and had dedicated teams of behavioural scientists and quants crunching data and designing new strategies to make anyone who already has the tendency maximally suicidal. I doubt we would consider that fine, right? Sports betting (really, most addiction-based industries) is like that. The problem isn’t just providing the activity, as some kind of relief valve. The problem is putting behind the activity a board of investors that wants to maximise profits and turns it into a full-blown Torment Nexus. Capitalism is a terrible way of providing a service when the service is “self-inflicted misery”.
I definitely think this is a general cultural zeitgeist thing. The progressive thing used to be the positivist “science triumphs over all, humanity rises over petty differences, leaves childish things like religions, nations and races behind and achieves its full potential”. But then people have grown sceptical of all grand narratives, seeing them as inherently poisoned because if you worry about grand things you are more inclined to disregard the small ones. Politics built around reclamation of personal identity, community, and tradition as forms of resistance against the rising tide of globalising capitalism have taken over the left. Suddenly being an atheist was not cool any more; it was arrogant and possibly somewhat racist. And wanting to colonise space reeked of white man’s burden even if there probably aren’t many indigenous people to displace up there. So everything moved inwards, and the writers followed that trend.
This is exactly the kind of thing Egan is reacting to, though—starry-eyed sci-fi enthusiasts assuming LLMs are digital people because they talk, rather than thinking soberly about the technology qua technology.
I feel like this borders on a strawman. When discussing this argument my general position isn’t “LLMs are people!”. It’s “Ok, let’s say LLMs aren’t people, which is also my gut feeling. Given that they still converse as or more intelligently than some human beings whom we totally acknowledge as people, where the fuck does that leave us as to our ability to discern people-ness objectively? Because I sure as hell don’t know, and I envy your confidence that must surely be grounded in a solid theory of self-awareness I can only dream of”.
And then people respond with some mangled pseudoscientific wording for “God does not give machines souls”.
I feel like my position is quite common (and is, for example, Eliezer’s too). The problem isn’t whether LLMs are people. It’s that if we can simply handwave away LLMs as obviously and self-evidently not being people, then we can probably keep doing that right up to when the Blade Runner replicants are crying about it being time to die, which is obviously just a simulation of emotion, don’t be daft. We have no criterion or barrier other than our own hubris, and that is famously not terribly reliable.
Since ChatGPT came out I feel like Egan has really lost the plot on this, starting with the discussions on Twitter. It felt like a combination of rejection of the “bitter lesson” (understandable: I too find it inelegant and downright offensive to my aesthetic sense that brute-force deep learning seems to work better than elegantly designed GOFAI, but whatever it is, it does undeniably work), and political cognitive dissonance that says that if people who wrongthink support AI, and evil billionaires throw their weight behind AI, then AI is bad, and therefore it must be a worthless scam, because it’s important to believe it is (this can of course work to some extent if you persuade the investors of it; but in the end it’s mostly a hopeless effort when all you have is angry philosophical rambling and all they have is a freaking magical computer program that speaks to you. I know which one is going to impress people more).
So basically, yeah, I understand the reasons to be annoyed, disgusted, scared and offended by reality. But it is reality, and I think Egan is in denial of it, which seems to have resulted in a novel.
That sounds more like my intuition, though obviously there still have to be differences given that we keep using self-attention (quadratic in N) instead of MLPs (linear in N).
In the limit of infinite scaling, the fact that MLPs are universal function approximators is a guarantee that you can do anything with them. But obviously we still would rather have something that can actually work with less-than-infinite amounts of compute.
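To make the complexity point concrete, here's a rough numpy sketch (toy sizes I picked arbitrarily, nothing here is taken from any real model) of single-head self-attention next to a per-token MLP; the N x N score matrix is where the quadratic cost lives, while the MLP only ever touches one token at a time:

```python
import numpy as np

N, d = 1024, 64                      # sequence length, embedding dim (made-up toy values)
x = np.random.randn(N, d)

# Self-attention: every token attends to every other token,
# so the score matrix alone is N x N -> O(N^2) time and memory.
Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))
q, k, v = x @ Wq, x @ Wk, x @ Wv
scores = q @ k.T / np.sqrt(d)                          # shape (N, N)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)         # softmax over keys
attn_out = weights @ v                                 # shape (N, d)

# Per-token MLP: the same weights are applied to each token independently,
# so cost grows linearly in N and tokens never exchange information.
W1, W2 = np.random.randn(d, 4 * d), np.random.randn(4 * d, d)
mlp_out = np.maximum(x @ W1, 0) @ W2                   # shape (N, d)
```

The quadratic term is what buys you token-to-token interaction; the per-token MLP never mixes information across positions at all, which is why MLP-only architectures have to bolt on some other mixing step.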
Interesting. But CNNs were developed for a reason to begin with, and MLP-Mixer does mention a rather specific architecture as well as “modern regularization techniques”. I’d say all of that counts as baking some inductive biases into the model, though I agree it’s a very light touch.
Does it make sense to say there is no inductive bias at work in modern ML models? It seems clear that literally brute-force searching ALL THE ALGORITHMS would still be infeasible no matter how much compute you throw at it. Our models are very general, but when we use e.g. a diffusion model for images, it exploits (and is biased towards) the kind of local structure we expect of images; when we use a transformer for text, it exploits (and is biased towards) the kind of sequential pair-correlation you see in natural language; and so on.
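As a toy illustration of what baking in an inductive bias buys you (a sketch with made-up numbers, not any real architecture): compare the parameter count of an unconstrained dense layer on a small image with that of a 3x3 convolution, which assumes locality and translation invariance.

```python
H, W, C = 32, 32, 3                      # a small toy image

# A fully connected layer mapping image -> image makes no assumptions:
# every output value may depend on every input value.
dense_params = (H * W * C) ** 2          # ~9.4 million weights

# A 3x3 convolution with C output channels assumes locality and
# translation invariance: each output depends only on a 3x3 neighbourhood,
# and the same small filter is reused at every position.
conv_params = 3 * 3 * C * C              # 81 weights

print(f"dense: {dense_params:,}  conv: {conv_params}")
```

The convolution can only express a tiny, highly structured subset of what the dense layer could, and that restriction is exactly the bias towards local image structure; the biases in diffusion models and transformers are lighter-touch versions of the same idea.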
Generalize this story across a whole field, and we end up with most of the field focused on things which are easy, regardless of whether those things are valuable.
I would say this problem plagues more than just alignment, it plagues all of science. Trying to do everything as a series of individual uncoordinated contributions with an authority on top acting only to filter based on approximate performance metrics has this effect.
On this issue specifically, I feel like the bar for what counts as an actually sane and non-dysfunctional organization to the average user of this website is probably way too lofty for 95% of workplaces out there (and that’s being generous!), so it’s not even that strange that this would be the case.
A whole lot of people, the vast majority that I’ve talked to, can easily answer this—“because they pay me and I’m not sure anyone else will”, with a bit of “I know this mediocracy well, and the effort to learn a new one only to find it’s not better will drain what little energy I have left”.
Or “last time I did that I ended up in this one which is even worse than the previous, so I do not wish to tempt fate again”.
Not just that, but as per manga spoilers:
The US already has a bunch of revived people going, including a Senku-level rationalist and scientist who has discovered the revival fluid in parallel and is in fact much less inclined to be forgiving, wanting the exact opposite of Tsukasa: to take advantage of the hard reset to build a full technocracy. By the time Senku & co arrive there, they already have automatic firearms and WW1-era planes. So essentially Tsukasa’s plan was always absolutely doomed. Just as has happened before, one day a backwards, isolationist Japan would wake up to find US gunships with superior firepower at its gates, and would be able to do nothing at all to stop them.
It’s not about science as a whole, but Assassination Classroom features one of the most beautiful uses of actual, genuine, 100% correctly represented math in fiction I’ve ever seen.
Spoilers:
During one of the exams, Karma is competing against the Principal’s son for top score. One of the problems involves calculating the volume of the Wigner-Seitz cell in a body-centered cubic lattice. This is obviously quite hard for middle schoolers, but believable for an exam whose explicit purpose was to test them to their limits and let the very best rise to the top. The Principal’s son tries to brute-force the problem by decomposing the shape into a series of pyramids—doable, but very tedious. Meanwhile Karma realizes that it’s as simple as noticing that all atoms are equivalent and must have the same volume, and that therefore there’s a simple and beautiful symmetry argument for why the volume is exactly 1/2 of the cubic unit cell. Which doubles as a metaphor for how everyone has their own talents and a domain they excel in—a realization Karma reaches thanks to his character growth. Absolutely top notch writing stuff.
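For the curious, the symmetry argument really is that short. A quick sketch, using the conventional cubic cell of side a:

```latex
% The conventional BCC cube contains 8 corner atoms shared 8 ways plus 1 body-centre atom,
% i.e. 2 atoms per cube. Wigner-Seitz cells of equivalent atoms are congruent and tile space,
% so each one gets exactly half the cube:
V_{\mathrm{WS}} = \frac{V_{\mathrm{cube}}}{n_{\mathrm{atoms}}}
               = \frac{a^{3}}{8 \cdot \tfrac{1}{8} + 1}
               = \frac{a^{3}}{2}
```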
Senku definitely holds that position, and of the authors I wouldn’t be surprised if Boichi at least did—he is famously a big lover of classic science fiction. If you check out his Dr. Stone: Byakuya solo spinoff manga, it starts out as a simple side story showing the life of Senku’s dad and his astronaut companions in space, and then spirals off in a completely insane direction involving essentially an AI singularity (understandably, it’s not canon).
There is a certain “Jump heroes shouldn’t kill wantonly” vibe, I guess, but truth be told Jump heroes have gotten significantly more willing to dirty their hands recently (Kagurabachi now seems set to become the next big thing, and Chihiro has a body count in the dozens at this point). So I don’t think editorial fiat explains this either.
It’s really part of the manga’s fantasy, as in, realistically, sure, Tsukasa would have been killed or kept in stone. But just like everyone is able to make up really complicated fully functioning devices with rudimentary means, everyone is able to reach Aumann Agreement within a relatively short time of being proven wrong. That’s just how the world rolls.
I mean, the problem of “my brain gets bad vibes too easily” is more general. Prejudice is a very common manifestation of it, but it’s something that can happen in other ways, and in the limit, as mentioned, you get bad vibes from everyone because you’re just paranoid, and it isolates you. I think this is more an issue of you trying to get a sense of how good your intuition is in the first place, and possibly of examining it to move those intuitive vibes to the conscious level. Like, for example, there are certain patterns in speech and attitude that scream “fake” to me, but it feels like I could at least try describing them.