Recent Ph.D. in physics from MIT, Complex Systems enthusiast, AI researcher, digital nomad. http://pchvykov.com
Values Darwinism
Yes! And here we are trying to study the spectral properties of said noise to reverse-engineer your radio, as well as to understand the properties of the electromagnetic field itself. So perhaps that’s one way to look at the practice :)
Aura as a proprioceptive glitch
Can you please commercialize this gem? I (and probably many others) would totally buy it—but making it myself is a bit of a hurdle...
So yes, I agree that intolerance can also be contagious—and it’s sort of a quantitative question of which one outweighs the other. I don’t personally believe in “evil” (as you sort of hint there, I believe that if we are sufficiently eager to understand, we can always find common humanity with anyone)—but all kinds of neurodivergences, such as a biological lack of empathy, do exist, and while we need not stigmatize them, they may be socially disruptive (like torching a city). Again, whether an absolutely tolerant society can be stable in the face of psychopaths torching cities once in a while is, I think, a quantitative question.
But what I’m excited about here is that if those quantities are sufficient (tolerance is sufficiently contagious, psychopaths are sufficiently rare, etc.), then we could have an absolutely tolerant society—even in that pacifist way you don’t quite like. That possibility in itself I find exciting, and it is something I think Popper did not see.
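To make that quantitative framing concrete, here is a minimal toy contagion model in the spirit of what I mean. All parameters (the conversion probabilities, the fraction of permanently intolerant agents) are hypothetical illustrations, not claims about real societies:

```python
import random

# Toy sketch: tolerance vs. intolerance spreading by pairwise contact.
# All numbers are hypothetical; the point is only that the outcome is
# a quantitative question about these parameters.
N = 1000          # population size
P_TOL = 0.10      # chance a tolerant agent converts an intolerant one
P_INTOL = 0.05    # chance an intolerant agent converts a tolerant one
N_FIXED = 10      # agents who never become tolerant (rare "psychopaths")

# The first N_FIXED agents are permanently intolerant; the rest start mixed.
state = ["intolerant"] * N_FIXED + [
    random.choice(["tolerant", "intolerant"]) for _ in range(N - N_FIXED)
]

for _ in range(200_000):
    i, j = random.sample(range(N), 2)       # random pairwise interaction
    if state[i] == state[j]:
        continue
    tol, intol = (i, j) if state[i] == "tolerant" else (j, i)
    if intol >= N_FIXED and random.random() < P_TOL:
        state[intol] = "tolerant"           # tolerance spreads by contact
    elif random.random() < P_INTOL:
        state[tol] = "intolerant"           # intolerance spreads by contact

print(state.count("tolerant") / N)  # high when P_TOL >> P_INTOL and N_FIXED is small
```

With tolerance sufficiently more contagious and the fixed intolerant core sufficiently rare, the tolerant fraction stays high at steady state without anyone acting against the intolerant; flip the inequality and intolerance takes over.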
While these are relevant elaborations on the paradox of tolerance, I’d also be curious to hear your opinion on the proposal I’m making here—could tolerance be contagious, without any intentional action to make it so (violent or otherwise)? If so, could that make the existence of an absolutely tolerant society conceivable?
I think your perspective also relies on an implicit assumption which may be flawed. I’m not quite sure what it is exactly—but it’s something like assuming that agents are primarily goal-directed entities. This is the game-theoretic framing—and in that case, you may be quite right.
But here I’m trying to point out precisely that people have qualities beyond the assumptions of a game-theoretic setup. Most of the time we don’t actually know what our goals are or where those goals came from. So I guess here I’m thinking of people more as dynamical systems.
Against the paradox of tolerance
For what it’s worth, let me just reply to your specific concern here: I think the value of anthropomorphization I tried to explain is somehow independent of whether we expect God to intervene or not. If you are saying that this “expectation” may be an undesirable side-effect, then that may be so for some people, but that does not directly contradict my argument. What do you think?
And the word was “God”
just updated the post to add this clarification about “too perfect”—thanks for your question!
I like the idea of agency being some sweet spot between being too simple and too complex, yes. Though I’m not sure I agree that if we can fully understand the algorithm, then we won’t view it as an agent. I think the algorithm for this point particle is simple enough for us to fully understand, but due to the stochastic nature of the optimization algorithm, we can never fully predict it. So I guess I’d say agency isn’t a sweet spot in the amount of computation needed, but rather in the amount of stochasticity perhaps?
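Here’s a minimal sketch of the kind of particle I have in mind: the update rule is fully transparent (a simple pull toward the goal plus Gaussian noise, with all numbers hypothetical), yet no individual trajectory can be predicted.

```python
import random

# A point particle doing noisy gradient descent toward a goal. The
# algorithm is simple enough to state in two lines, but the stochastic
# kicks make each run take a different, unpredictable path.
def step(x, y, goal=(10.0, 10.0), lr=0.1, noise=0.5):
    gx, gy = goal
    dx, dy = gx - x, gy - y                  # deterministic pull toward the goal
    x += lr * dx + random.gauss(0.0, noise)  # stochastic kick: the "mistake"
    y += lr * dy + random.gauss(0.0, noise)
    return x, y

x, y = 0.0, 0.0
for _ in range(100):
    x, y = step(x, y)
print(round(x, 2), round(y, 2))  # near (10, 10), by a different route each run
```

So we understand the algorithm completely, and still the particle wanders and overshoots, which is exactly the stochasticity I suspect our impression of agency tracks.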
As for other examples of “doing something so well we get a strange feeling,” the chess example wouldn’t be my go-to, since the action space there is somehow “small”: it is discrete and finite. I’m more thinking of the difference between a human ballet dancer and an ideal robotic ballet dancer—that slight imperfection makes the human somehow relatable for us. E.g., in CGI you have to make your animated characters make some unnecessary movements, each step must be different from any other, etc. We often admire hand-crafted art more than perfect machine-generated decorations for the same sort of minute asymmetry that makes it relatable, and thus admirable. In vocal recording, you often record the song twice for the L and R channels, rather than just copying one take (see “double tracking”)—the slight differences make the sound “bigger” and “more alive.” Etc., etc.
Does this make sense?
ah, yes! good point—so something like the presence of “unseen causes”?
The other hypothesis the lab I worked with looked into was the presence of some ‘internally generated forces’ - sort of like an ‘unmoved mover’ - which feels similar to what you’re suggesting?
In some ways, this feels not really more general than “mistakes,” but sort of a different route. Namely, I can imagine some internal forces guiding a particle perfectly through a maze in a way that would still look like an automaton.
Just posted it—the post came out feeling fairly basic, but I’m still curious about your opinion: https://www.lesswrong.com/posts/aMrhJbvEbXiX2zjJg/mistakes-as-agency
Mistakes as agency
yeah, I thought so too—but I only had very preliminary results, not enough for a publication… but perhaps I could write up a post based on what I had
thanks for the support! And yes, definitely closely related to questions around agency. With agency, I feel there are two parallel, and related, questions: 1) can we give a mathematical definition of agency (here I think of info-theoretic measures, abilities to compute, predict, etc.), and 2) can we explain why we humans view some things as more agent-like than others (this is a cognitive-science question that I worked on a bit some years ago with these guys: http://web.mit.edu/cocosci/archive/Papers/secret-agent-05.pdf ). I never got around to publishing my results—but I was finding something very much like what you write. I was testing the hypothesis that if a thing seems to “plan” further ahead, we view it as an agent—but instead I was finding that the number of mistakes it makes in the planning is more important.
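For concreteness, here is roughly the kind of “mistakes” measure I was testing (the exact formalization below is a hypothetical reconstruction, not the protocol from that work): score each observed move against an optimal shortest-path planner, and count the fraction of moves that fail to make progress.

```python
from collections import deque

# Hypothetical sketch of a "mistake rate": compare an agent's trajectory
# on a grid maze against BFS shortest-path distances, and count moves
# that do not reduce the distance to the goal.
def bfs_dist(grid, start, goal):
    """Shortest-path length on a 0/1 grid (1 = wall); inf if unreachable."""
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        (r, c), d = queue.popleft()
        if (r, c) == goal:
            return d
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), d + 1))
    return float("inf")

def mistake_rate(grid, trajectory, goal):
    """Fraction of observed moves that fail to reduce distance-to-goal."""
    bad = sum(
        bfs_dist(grid, b, goal) >= bfs_dist(grid, a, goal)
        for a, b in zip(trajectory, trajectory[1:])
    )
    return bad / max(1, len(trajectory) - 1)
```

The preliminary pattern was that judged agency tracked this kind of mistake rate more closely than it tracked how many steps ahead the thing planned.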
A physicist’s approach to Origins of Life
I really appreciate your care in having a supportive tone here—it is a bit heart-wrenching to read some of the more directly critical comments.
great point about the non-consensual nature of Ea’s actions—it does create a dark undertone to the story, and needs either correcting or expanding (perhaps framing it as the source of the “shadow of sexuality,” so we might also remember the risks)
the heteronormative line I did notice, and I think it could generalize straightforwardly—this was just the simplest place to start. I love your suggestion of defining “sex” as “acting on a body specifically to produce pleasure in that body.”
And yes, there are definitely many, many aspects of sex that can then be addressed within this lore—like rape, consent, STDs, procreation, sublimation, psychological impacts, gender, family, etc. Taking the Freudian approach, we could really frame all aspects of human life within this context—could be a fun exercise.
I guess the key hypothesis I’m suggesting here is that explaining the many varied aspects of sexuality in terms of a deity could help clarify all its complexity—just as the pantheon of gods helped early pagan cultures make sense of the world and make some successful predictions / inventions. It would be nicer to have a science-like explanation, but people would have a harder time keeping that straight (and I believe we don’t yet have enough consensus in psychology as a science anyway).
yeah I don’t know how cultural myths like Santa form or where they start—now they are grounded in rituals, but I haven’t looked at how they were popularized in the first place.
Thanks for your comment!
From this and other comments, I get the feeling I didn’t make my goal clear: I’m trying to see if there is any objective way to define progress / values (starting from assuming moral relativism). I’m not trying to make any claim as to what these values should be. The Darwinian argument is the only one I’ve encountered that made sense to me—and so here I’m pushing back on it a bit—but maybe there are other good ways to objectively define values?
Imho, we tend to implicitly ground many of our values in this Darwinian perspective—hence I think it’s an important topic.
I like what you point out about the distinction between prescriptive vs descriptive values here. Within moral relativism, I guess there is nothing to say about prescriptive values at all. So yes, Darwinism can only comment on descriptive values.
However, I don’t think this is quite the same as the fallacies you mention. “Might makes right” (Darwinian) is not the same as “natural makes right”—natural is a series of historical accidents, while survival of the fittest is a theoretical construct (with the caveat that at the scale of nations, the number of conflicts is small, so historical accidents could become important in determining the “fittest”). Similarly, “fittest” as determined by who survives seems like an objective fact rather than a mind projection (with the caveat that an “individual” may be a mind projection—but I think that’s a bit deeper).
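To illustrate that caveat with a toy calculation (all numbers hypothetical): suppose one nation is objectively “fitter,” winning any single conflict with probability 0.6. With few conflicts, the fitter side still loses a majority of them quite often, so “who survived” only converges on “who was fittest” in the large-sample limit.

```python
import random

# Toy illustration: how often does the objectively fitter side win a
# majority of n conflicts, if it wins each single conflict w.p. 0.6?
def fitter_wins_majority(n_conflicts, p_win=0.6):
    wins = sum(random.random() < p_win for _ in range(n_conflicts))
    return wins > n_conflicts / 2

TRIALS = 10_000
for n in (1, 5, 101):
    freq = sum(fitter_wins_majority(n) for _ in range(TRIALS)) / TRIALS
    print(n, round(freq, 2))  # ~0.6 at n=1, approaching 1.0 as n grows
```

At the scale of nations we are in the small-n regime, where the historical accidents can easily dominate the theoretical “fitness.”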