Euripides was famously more “progressive” than Aeschylus, to the point of getting mocked about it by the more conservative Aristophanes IIRC. Athens had its own politics (lots of it!) and while history may not be an exact circle, sometimes it rhymes pretty hard.
That’s also the thing: myths like these are often canvases that get riffed on for various purposes. It’s hardly a new thing; their own cultures were doing it already from the get-go. “Myths” aren’t singular monolithic things; they’re pieces of culture that can be rearranged in many ways.
Maybe it’s just like modern soap operas or reality shows, and the point is just schadenfreude. “Man I am sure happy I’m not any one of those horrible people”.
BlueSky user Tommaso Sciortino points out that part of what we’re witnessing is a cultural shift away from people fixating on religious texts during mental health episodes to fixating on LLMs. I can only speculate on what’s causing this, but if I had to guess it has a lot to do with AI eschatology going mainstream (both positive and negative). In the AI the psychotic finds both a confidant and a living avatar of an eventual higher power. They can bring their paranoid concerns to this impossible entity that seems (at a glance, if one doesn’t inspect too deeply) to know everything. As I will discuss later, in most of the cases I’m familiar with, the ontological vertigo of a machine expressing what seem to be human emotions is a key component of the breakdown or near-breakdown.
I think it’s also in general that, to riff on a famous Bruce Lee quip, Bibles don’t talk back.
With things like cassava I wonder how the South American people got onto the effect in the first place, if it’s so weak and hard to disentangle. We have trouble doing that sort of thing now with meta-reviews and double-blind studies!
I think it’s fair that, while not “eternal”, some stories and myths hold concepts about very basic facts of human psychology that keep being relevant because they’re just emergent from our biology and/or basic game theory. We still talk about “sour grapes” to refer to someone deciding to disguise with disdain the fact that they have to settle for less than they wanted; that dates back to Aesop’s fables.
Also some of these are bundles of multiple things at once. I think for example the myth of Iphigenia’s sacrifice isn’t quite as straightforward—Agamemnon overall as a man does not come off as pious and righteous; he is arrogant and prideful instead. It’s hard to fully parse what a Greek’s opinion of that story would have been overall, but Agamemnon is also the same guy who pisses off Achilles with his greed (which, fair, was about a slave… but neither man was exactly an abolitionist here), and eventually gets murked by his wife in cahoots with her lover because of that sacrifice. And Agamemnon in general was the last of a long cursed lineage that basically kept going through these cycles of murder and revenge because of the sins of their ancestors. He’s not a positive model, and the sacrifice may well have been seen as an act of prideful selfishness on his part, not unlike Stannis Baratheon’s sacrifice of his daughter in Game of Thrones, which is modelled after it.
I think we see this often in myths that stay with us as powerful allegories because they exemplify a trope or pattern that we may want to express. For example, the myth of Orpheus and Eurydice could exemplify how excessive greed or inability to control your urges can lead to losing everything. Orpheus quite literally fails the “marshmallow test”. A similar thing happens in the story of Eros and Psyche, though in this case through her own perseverance Psyche manages to eventually win back what she lost. David vs Goliath is a story about how moral fortitude, courage and wits can triumph in the face of naked brutish violence. The tale of Hua Mulan is a story about the conflict between duty to the family and duty to the law, and how one navigates that (so is the story of Antigone who buries her traitorous brother and gets executed for it).
The thing is also, these are patterns, not universal truths—you’ll sometimes find myths expressing opposing patterns because both can hold in the appropriate circumstances. And some myths simply express values that we do not acknowledge any more as worth uplifting. Abraham and Isaac is about blind obedience and faith unto God. The Tower of Babel is about how if you try to build or do something too ambitious you’ll get smacked down, and you should just know your place.
We have a lot more stories that have become established with this “mythic” power today, if anything. David and Goliath is also Frodo and Sauron, or Luke Skywalker and the Death Star. If you think about the ties between power and responsibility, your mind likely evokes Spider-Man’s famous motto. No parable about vicious ambition eating itself and leading to a disastrous fall is better known today than the Tale of Walter White, He Who Broke Bad.

And the old stories aren’t dead. We know more about other cultures than we used to. We eat that shit up, if anything. We have games and shows and comics about the Greco-Roman gods and their myths, and about the Norse, and about the classic Biblical stories… Are these myths weakened by the fact we don’t literally believe in their truth any more? Well, look at how long Christian Europe hung onto classical pagan myths as a source of metaphor. You only need to walk through a frescoed 18th century palace, visit a museum, or read the Divine Comedy to see medieval and early modern artists expressing themselves in the language of Greek gods and heroes. Did they literally believe those to be true? Obviously not, they were good Christians who would never do that. But they believed them to be meaningful and powerful, and thus sort of story-true instead of true-true.

I think we’re doing perfectly fine in that department, and if we’re not, it’s because of limits and constraints on artistic expression which have more to do with its commercial model than with any spiritual impoverishment. Requiring people to literally believe that every single myth they reference is factual would require them to be naive idiots. And we still have our supposedly true-true myths that are still in some sense myths—meaning they double as powerful stories imbued with meaning. We have stories about the creation of the world by the Big Bang, about the rise and fall of the powerful Dinosaurs, about the rise of one clever ape who managed to spread across the world and become Man, about all sorts of kings and heroes and empires and their wars and struggles. I’d argue the mythical cycle built around World War 2 is in some sense the creation myth of the modern liberal world. These myths of course don’t quite look the same way the Iliad and the Odyssey did—but then, does anything look the same as 3000 years ago?
AFAIK the smoothness can add useful properties at training time, because the gradient is better behaved around zero. And ReLUs won over sigmoids because not being flat on both sides allowed their gradients to propagate better across several layers (whereas with a sigmoid, as soon as you saturate on either side the gradient becomes vanishingly small, and it becomes very hard to dislodge the weights from that regime).
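To make the gradient point concrete, here’s a minimal numpy sketch (my own illustration, not something from the original comment) of how much gradient each activation passes back once pre-activations drift away from zero:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    s = sigmoid(x)
    return s * (1.0 - s)              # peaks at 0.25, shrinks toward 0 as |x| grows

def relu_grad(x):
    return np.where(x > 0, 1.0, 0.0)  # exactly 1 on the active side, 0 otherwise

xs = np.array([0.5, 2.0, 5.0, 10.0])
for x, sg, rg in zip(xs, sigmoid_grad(xs), relu_grad(xs)):
    print(f"x={x:5.1f}  sigmoid'={sg:.5f}  relu'={rg:.1f}")

# Backprop multiplies one such factor per layer, so a stack of saturated
# sigmoids shrinks the gradient geometrically (the vanishing-gradient issue),
# while the ReLU's derivative of 1 on its active side passes it through intact.
```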
NNs are weird functions, but I don’t think you can really describe most of what you do in ML with smooth manifolds. Kolmogorov-Arnold function approximators, which are sorta related to NNs (really, NNs are a subset of K-A approximators), are known to be weird functions, not necessarily smooth. And lots of problems, like classification (which btw is essentially what text prediction is), aren’t smooth to begin with.
There is some intuition that you have to enforce some sort of smooth-like property as a way of generalizing the knowledge and combating overfitting; that’s what regularization is for. But it’s all very vibey. What you would really need is a proper universal prior for your function that you then update with your training data, and we have no idea what that looks like—only empirical knowledge that some shit seems to work better for whatever reason.
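As a concrete instance of the “regularization as a stand-in prior” idea (a standard textbook connection, not something specific from this thread): ridge regression, where an L2 penalty on the weights is exactly MAP estimation under a Gaussian prior on those weights.

```python
import numpy as np

# Minimal sketch: least squares with an L2 penalty (ridge regression).
# The penalty strength lam plays the role of a hand-picked prior, chosen
# by cross-validation (or vibes) rather than derived from first principles.

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
w_true = rng.normal(size=10)
y = X @ w_true + 0.1 * rng.normal(size=50)

lam = 1.0  # stronger lam = stronger pull toward small, "smoother" weights
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(10), X.T @ y)

print("fit error:", np.linalg.norm(X @ w_ridge - y))
print("weight norm (kept small by the penalty):", np.linalg.norm(w_ridge))
```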
There was a shift, but it’s defended and rationalised in the terms I presented. Regardless of how and why the shift happened, many people eventually do simply believe in the rationalization itself, even if it emerged (probably not intentionally, but via selection effects) to simply fit the new shape of the coalition that was pushing it.
Why does it matter how long they get to experience the self-satisfaction after the action was performed?
Generally speaking I’d say utility is somewhat weighted by duration. I’d be suspicious of a utility function that says that one year of atrocious pain is as bad as one minute, for example.
Other than that, sure, I think it’s fair to say it’s selfish in that very broad sense. I guess my point is that it’s something that is ill-captured intuitively by terms like “self-interest”. To me interest implies some kind of objective direct benefit to my utility, as opposed to a more general goal/want that only implies my aim for something regardless of the reason for it. I’m not sure what a good term for this sort of want-for-want’s-sake would be, to distinguish it from the more straightforward wanting something because it brings me pleasure, enjoyment, safety, etc.
I suppose, abnegation of anything that can be construed as actually benefitting my utility function in ways other than the most abstract level of “I wanted X to happen and made it happen”. And I agree it can’t be any less than that.
Consider the extreme case of someone who sacrifices their life to save another. Even though they may derive serenity or satisfaction from that in their very last moments, it’s hard to construe that as “selfish” in any but the broadest sense, given they don’t even get to experience that for long. You can’t escape that broadest sense, I agree, but it’s so broad as to render the qualification essentially meaningless, especially compared to the usual understood meaning of “selfish”.
I guess my take might be somewhere between warm and downright scalding hot, but I believe mathematical knowledge to be empirical, either in the sense that we acquire it from direct observation, or in the sense that it has been preprogrammed into our brains by evolution (which I treat as a distinct case from truly transcendent knowledge, though I guess you can call that non-empirical—it’s just labeling at that point, and I agree that’s how math in our brains works).
At one level, “what I prefer” is information—it’s a sample of one, but still the most detailed insight into a human mind I’ll ever have. In that sense, my preferences feed, together with other inputs, into an algorithm that outputs predictions and recommended actions.
But at a higher level, “what I prefer” is also the stuff that the algorithm itself is made of. Because ultimately everything always comes from me. Even if it’s something that I’m trying my very best to gather as empirical evidence from the world around me, it’s filtered by me. If I am King Solomon and must do the good thing when two women claim to be mothers of the same baby, I still need to have some way to judge and be convinced of which woman is lying and which is telling the truth. And whatever my process is, it may be inferred from my past experience, but still filtered through my own judgement, etcetera. Just like with scientific and empirical matters—I can try to update my beliefs to best approximate some ideal truth, but I can never state with 100% certainty that I have reached it.
I very explicitly said it’s not about self-interest, but rather about our epistemological relation to the world. That even if we have an idealised notion of what constitutes “good”, we can still only judge that good (and its eventual outcomes) from our own limited perspective. Even if our principle was just “listen to others and do for them what they ask”, we’re still having to do the parsing and interpretation of their words, modelling of how our actions will impact their utility, etc.
Plato’s Trolley
Right, but the problem is the people who believe in astrology (or who work for an astrology company, or whose friends are into astrology, etc.) will say “no, it’s wrong to criticize astrology” and the people who don’t have a stake in astrology will say “yes, it’s okay to criticize astrology” and there’s no neutral arbiter to adjudicate the disagreement.
That’s the bare minimum I can expect. The problem is when people who don’t believe in astrology still take it upon themselves to make it a general social rule that you simply shouldn’t criticize any sufficiently dearly-held belief, because doing so hurts people’s feelings. That can tip the scales from a minority to a majority and establish norms that are in fact toxic in the long term. I actually remember some time ago some Twitter discourse about how love for astrology is feminine-coded, and therefore mocking astrology is in fact something men do to put women down or something. That’s a bit of a ridiculous example and not many people were going along with it, but there are bigger things (like the shift in attitudes towards religion and militant atheism) that matter more.
As for intent, I tend to favor treating intentional and unintentional machiavellianism the same, as doing otherwise just amounts to punishing people for having an accurate self-model, which seems like a bad way to promote truthseeking.
I don’t follow this bit, can you expand on it?
I don’t think this refers necessarily to intentional malice. Suppose there is someone who makes important, impactful decisions based on astrology. You can’t just tell them “hey you made a silly mistake reading the precise position of Mercury retrograde here, it happens”. You have to say “astrology is bunk and basing your decisions on it is dangerous”. But in a culture in which the rule is “if someone strongly enough believes in something—like astrology—that they’ve built their entire identity around it, attacking that something is the same as an attack on their person which will inflict suffering on them, and therefore shouldn’t be done”, that action is taboo. Which is the problem that the post gestures at, I think.
Of course one can argue that maybe it’s strategically better to not go too hard—if for example astrology is a majority belief and most people will side with the other person. But that’s a different story. If saying “hey people, this guy believes in astrology! Stop listening to him!” is enough to make them lose status, should you be able to do it or not? Which is more important, their personal sense of validation, or protecting the community from the consequences of their wrong beliefs?
Lots of strange things in math and computational science arise from recursion, so “a system that can think about itself” does sound like it might have something special going on. If we’re looking for consciousness in a purely materialistic/emergent way, rather than just posit it via dualism or panpsychism, I genuinely can’t think of many other serious leads to pursue.
I don’t know. The context window is essentially an AI’s short-term memory. If self-reflection were a condition of consciousness, prompting an AI to talk about itself could make it significantly more conscious than having it write Python code for a server.
Does that work? The effect is weak, and the pressure is competing with much more significant causes of death. And myths spread horizontally too. They’re not single-family things; there can’t be enough variability and isolation for full Darwinian selection, because it’s not like the tribes with the wrong belief get completely exterminated by that mistake.
That said, reading up on it, it sounds like cassava can also cause acute poisoning, and that sounds like a much stronger feedback signal for people to notice.