I (kinda) recognized it because my partner (who actually did the recognizing) uses it to study finite-sized microbial ecology models. In their case they add an additional “cavity” species to an existing community and solve for self-consistency. I’m very excited to see the superposition work.
That does sound like an impressive success. I’m curious which policies you got them to change.
(DMed my top recommendation, someone who used to have pretty bad OCD, helped resolve someone else’s and mostly their own, and is full time doing x-risk reduction work)
Did your confusion last beyond the first paragraph (or the sixth, if you count the introductory note)
> “Naturalism” is a label for a conceptual framework, investigatory discipline, and semi-formalized way of looking at and learning about the world. I’ve been developing and teaching naturalism for the past couple of years, if you start counting on the day I chose the term, or since 2013, if you take a more historical perspective.
?
It’s not exactly the same of course but Yudkowsky has been predicting that ASIs would be able to effectively hack people’s minds for a really long time.
I think @Eli Tyre kinda got it from one example in 2023. (second comment)
Mostly because it doesn’t work? Because the analogies you’re assuming between big stocks and small decisions don’t apply? Big stocks have billions of dollars traded and reporting/auditing requirements backed by courts and banks and governments. Try drawing a line from how Dow stocks behave to how penny stocks behave, and then extrapolate way past there.
Hey, sorry for the late reply. Quick summary:
There were more vaguely concerning developments, but certainly nothing like “this is it, we’ve cracked SOTA using [X] online learning architecture”. Overall, my updates are towards LLMs + long context being sufficient for pretty dangerous capabilities, but with a decent chunk of room left for online learning to do a sudden leap basically out of nowhere (mostly because it’s so cheap to train one of these models).
It’s not exactly the same, but Zvi has strongly argued against persuasion risk being taken out of the OpenAI preparedness framework.
Why don’t EA/rationalist firms use prediction markets for deciding who to hire (futarchy-style)? Your application to a company would be something like ticking a box that says, “I give you permission to create a prediction market on me that I won’t trade in.” The statement the market would trade on could be P(we won’t regret hiring X | we hire X) or P(X will meet the following KPIs | we hire X), where X is the candidate. Effectively, this would outsource the work of hiring to people who want to profit on the market.
You could also make candidates do some set of tests or competitions and show that information to traders (as well as resume and other standard info). Then you pick the top trading candidates you want (or several above some threshold). This would also give more opportunity to some candidate X to publicly do things they believe will convince traders to increase their market’s price. It would be really cool if the mechanism actually deciding hiring decisions had the capacity to look at everything a candidate was willing to do/show to get hired, vs the current model of very restricted time commitment per candidate.
I suspect the blocker to this is that the trading volume would just be too low to give meaningful information.
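For concreteness, here’s a minimal sketch (in Python, with made-up numbers — the threshold, liquidity parameter, and trade sizes are all hypothetical) of the kind of automated market maker such a hiring market could run on: an LMSR (logarithmic market scoring rule), trading shares of “we won’t regret hiring X”, with the understanding that the market resolves N/A and refunds trades if X isn’t hired:

```python
import math

def lmsr_cost(q, b=100.0):
    """LMSR cost function C(q) = b * log(sum_i exp(q_i / b))."""
    m = max(x / b for x in q)  # subtract the max for numerical stability
    return b * (m + math.log(sum(math.exp(x / b - m) for x in q)))

def lmsr_prices(q, b=100.0):
    """Implied probabilities: p_i = exp(q_i / b) / sum_j exp(q_j / b)."""
    m = max(x / b for x in q)
    exps = [math.exp(x / b - m) for x in q]
    s = sum(exps)
    return [e / s for e in exps]

def buy(q, outcome, shares, b=100.0):
    """Cost a trader pays to buy `shares` of `outcome`; returns (cost, new_q)."""
    new_q = list(q)
    new_q[outcome] += shares
    return lmsr_cost(new_q, b) - lmsr_cost(q, b), new_q

# Outcome 0: "we won't regret hiring X" (conditional on hiring X).
q = [0.0, 0.0]              # market opens at 50/50
cost, q = buy(q, 0, 50)     # a trader bets on the candidate
p_good = lmsr_prices(q)[0]  # updated market estimate
hire = p_good > 0.6         # firm's (hypothetical) decision rule
```

One nice property: the liquidity parameter b caps the market maker’s worst-case loss at b·ln(number of outcomes), so the firm’s subsidy for eliciting the information is bounded up front. The low-volume worry above still applies, though — a subsidized market maker only helps if traders with real information show up.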
some more variations on this theme:
Nick Land in Meltdown: “Nothing human makes it out of the near-future.”, “Capital only retains anthropological characteristics as a symptom of underdevelopment; reformatting primate behaviour as inertia to be dissipated in self-reinforcing artificiality. Man is something for it to overcome: a problem, drag.”
Historical materialism views the organization of society throughout history as being the argmax of production (or maybe the argmax of the development of production, or of productive power, or something), and after AGI, humans will not be part of the argmax of production for long.
“when you make something less useful (eg by introducing other things that can do its “jobs/functions” better), you make it less likely to stick around”, “what is no longer good for anything tends to get discarded”
“messy futures are bad for humans” (in the limit: “a uniformly random configuration of atoms doesn’t have anything like humans in it”)
conversely, you can make something more likely to be preserved by figuring out how to make it instrumental to more valued/productive/competitive things/processes: each such process then provides a reason to keep the thing around, and a constraint on any replacement for it. “instrumentalizing the terminal”, i.e. protecting good things this way, is a sort of dual to subgoal stomp. i think protection by instrumentality is the main way one gets conserved structures in biological evolution
And still Herbie’s unblinking eyes stared into hers and their dull red seemed to expand into dimly-shining nightmarish globes.
He was speaking, and she felt the cold glass pressing against her lips. She swallowed and shuddered into a certain awareness of her surroundings.
Still Herbie spoke, and there was an agitation in his voice—as if he were hurt and frightened and pleading.
The words were beginning to make sense. “This is a dream,” he was saying, “and you mustn’t believe in it. You’ll wake into the real world soon and laugh at yourself. He loves you, I tell you. He does, he does! But not here! Not now! This is all illusion.”
Susan Calvin nodded, her voice a whisper, “Yes! Yes!” She was clutching Herbie’s arm, clinging to it, repeating over and over, “It isn’t true, is it? It isn’t, is it?”
Just how she came to her senses, she never knew—but it was like passing from a world of misty unreality to one of harsh sunlight. She pushed him away from her, pushed hard against that steely arm, and her eyes were wide.
“What are you trying to do?” Her voice rose to a harsh scream. “What are you trying to do?”
Herbie backed away, “I want to help.”
The psychologist stared, “Help? By telling me this is a dream? By trying to push me into schizophrenia?” A hysterical tenseness seized her, “This is no dream! I wish it were!”
When I travel to Vienna by train, sometimes I remember that I am crossing a line that in my childhood was guarded by soldiers ordered to kill everyone who tried to escape the socialist paradise.
You picked the wrong stories from I, Robot! “Liar!” is a great match.
I think The Whispering Earring is a fundamentally different thing—its broad message is “automation will atrophy the skills that make us human”, which is a pretty common message in sci-fi, and distinct from “isolation from human feedback will remove a necessary check on our worst impulses”, which I think is what OP was asking about.
Reason, as far as I know, is about robots attributing religious significance to their designated function. I don’t think it fits either. It’s an interesting take on aligning superficially human-like AI, though.
Robbie is closer, in that it broaches the idea of isolation from human interaction, but Mrs. Weston is portrayed as being in the wrong for disrupting a genuine friendship between her daughter and the robot.
To answer OP’s question, The Veldt is the closest thing that comes immediately to mind. Children raised by a machine-nursery become obsessed with the instant gratification it provides, and develop dangerously uncanny behavior as a result.
I can’t speak for other PGT companies, but I know Herasight will do genetic testing on multiple prospective egg or sperm donors for clients.
I don’t know of a single egg or sperm bank that does this kind of in-depth testing by default. They don’t even offer basic polygenic scores, let alone expanded carrier screening.
You can’t screen eggs directly because there’s no way to read the genome without destroying the egg, and each egg carries only a single copy of each chromosome.
You could in theory reconstruct the donor’s genome if you had enough of her eggs, but that would require buying and fertilizing a bunch of them.
It’s easiest to just directly ask the donor to get genetically sequenced.
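As a rough back-of-the-envelope on why one egg isn’t enough (a toy model I’m assuming here: each egg independently carries one of the donor’s two alleles at a heterozygous site with probability 1/2, ignoring linkage, mosaicism, and sequencing error):

```python
def p_site_recovered(k):
    """P(both of the donor's alleles at a heterozygous site are observed
    among k independently sampled haploid eggs)."""
    # You miss an allele only if all k eggs carry the same one: 2 * (1/2)^k
    return 1 - 2 * 0.5 ** k

for k in (1, 2, 5, 10):
    print(f"{k} eggs: {p_site_recovered(k):.4f} of het sites recovered")
```

Under this toy model a single egg recovers nothing new at heterozygous sites, while ten eggs recover over 99% of them — consistent with the point above that reconstruction takes a bunch of eggs, and asking the donor to get sequenced is far cheaper.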
Are there clinics which do full genetic testing on their egg donors?
Personally, from a gay-man-considering-surrogacy perspective, it seems like that is the biggest potential uplift. But from what I see on e.g. CNY’s website, they do that only for diseases and not the whole genome. I guess if the clinics tell us the phenotype (e.g. whether the donor is college educated), that would help.
Or is it possible to screen donor eggs directly? (Maybe expensive).
Anthropic are rather explicitly attempting to get Claude to not just compliantly do what it’s told, but to say no or redirect you when necessary/appropriate. They are steering for minimal viable corrigibility, not maximal corrigibility. I don’t think an ASI with Claude’s moral sensibilities would happily “write code which jailbreaks other LLMs and enables them to do dangerous ML research”. Whether that’s Superintelligence Alignment is a matter of opinion, but it’s not just product Alignment. (Apparently too explicitly for the Department of War’s liking.)
In Reason, the religious robot at one point starts to convince the human engineers that maybe it is right, but in the end the engineers hold onto their priors that humans created robots.