Recent Ph.D. in physics from MIT, Complex Systems enthusiast, AI researcher, digital nomad. http://pchvykov.com
For what it’s worth, let me just reply to your specific concern here: I think the value of anthropomorphization I tried to explain is somehow independent of whether we expect God to intervene or not. If you are saying that this “expectation” may be an undesirable side-effect, then that may be so for some people, but that does not directly contradict my argument. What do you think?
just updated the post to add this clarification about “too perfect”—thanks for your question!
I like the idea of agency being some sweet spot between being too simple and too complex, yes. Though I’m not sure I agree that if we can fully understand the algorithm, then we won’t view it as an agent. I think the algorithm for this point particle is simple enough for us to fully understand, but due to the stochastic nature of the optimization algorithm, we can never fully predict it. So I guess I’d say agency isn’t a sweet spot in the amount of computation needed, but rather in the amount of stochasticity perhaps?
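To make the point concrete, here's a minimal toy sketch (my own illustration, not the actual experiment from the lab): a point particle follows a fully transparent rule, noisy gradient descent on a quadratic potential, yet two runs with different noise take different paths, so we can fully understand the algorithm without being able to fully predict it.

```python
import random

def noisy_descent(target, steps=200, lr=0.2, noise=0.3, seed=0):
    """Point particle doing noisy gradient descent toward `target`.

    The update rule is completely transparent, but the Gaussian
    noise term makes each trajectory unpredictable in detail.
    """
    rng = random.Random(seed)
    x = 0.0
    path = [x]
    for _ in range(steps):
        grad = x - target                      # gradient of 0.5*(x - target)^2
        x += -lr * grad + noise * rng.gauss(0, 1)
        path.append(x)
    return path

# Same simple algorithm, different noise draws: both runs end up
# near the target, but their paths differ throughout.
path_a = noisy_descent(target=5.0, seed=1)
path_b = noisy_descent(target=5.0, seed=2)
```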
As for other examples of “doing something so well we get a strange feeling,” the chess example wouldn’t be my go-to, since the action space there is somehow “small,” being discrete and finite. I’m thinking more of the difference between a human ballet dancer and an ideal robotic ballet dancer—that slight imperfection makes the human somehow relatable for us. E.g., in CGI you have to make your animated characters make some unnecessary movements, each step must be different from any other, etc. We often admire hand-crafted art more than perfect machine-generated decorations for the same sort of minute asymmetry that makes it relatable, and thus admirable. When recording vocals, you often record the song twice for the L and R channels rather than just copying one (see ‘double tracking’) - the slight differences make the sound “bigger” and “more alive.” Etc., etc.
Does this make sense?
ah, yes! good point—so something like the presence of “unseen causes”?
The other hypothesis the lab I worked with looked into was the presence of some ‘internally generated forces’ - sort of like an ‘unmoved mover’ - which feels similar to what you’re suggesting?
In some way, this feels not really more general than “mistakes,” but rather a different route. Namely, I can imagine some internal forces guiding a particle perfectly through a maze in a way that would still look like an automaton.
Just posted it—feels like the post came out fairly basic, but I’m still curious about your opinion: https://www.lesswrong.com/posts/aMrhJbvEbXiX2zjJg/mistakes-as-agency
yeah, I thought so too—but I only had very preliminary results, not enough for a publication… but perhaps I could write up a post based on what I had
thanks for the support! And yes, definitely closely related to questions around agency. With agency, I feel there are two parallel, related questions: 1) can we give a mathematical definition of agency (here I think of info-theoretic measures, abilities to compute, predict, etc.), and 2) can we explain why we humans view some things as more agent-like than others (this is a cognitive science question that I worked on a bit some years ago with these guys: http://web.mit.edu/cocosci/archive/Papers/secret-agent-05.pdf ). I never got around to publishing my results—but I was discovering something very much like what you write. I was testing the hypothesis that if a thing seems to “plan” further ahead, we view it as an agent—but instead I was finding that the number of mistakes it makes in the planning actually matters more.
I really appreciate your care in keeping a supportive tone here—it’s a bit painful to read some of the more directly critical comments.
great point about the non-consensual nature of Ea’s actions—it does create a dark undertone to the story, and needs either correcting or expanding (perhaps framing it as the source of the “shadow of sexuality”—so we might also remember the risks).
the heteronormative line I did notice, and I think it could generalize straightforwardly—this was just the simplest place to start. I love your suggestion of defining “sex” as “acting on a body specifically to produce pleasure in that body.”
And yes, there are definitely many, many aspects of sex that can then be addressed within this lore—like rape, consent, STDs, procreation, sublimation, psychological impacts, gender, family, etc. Taking the Freudian approach, we could really frame all aspects of human life within this context—could be a fun exercise.
I guess the key hypothesis I’m suggesting here is that explaining the many varied aspects of sexuality in terms of a deity could help to clarify all its complexity—just as the pantheon of gods helped early pagan cultures make sense of the world and make some successful predictions / inventions. It could be nicer to have a science-like explanation, but people would have a harder time keeping that straight (and I believe we don’t yet have enough consensus in psychology as a science anyway).
yeah I don’t know how cultural myths like Santa form or where they start—now they are grounded in rituals, but I haven’t looked at how they were popularized in the first place.
hmm, with all this feedback I’m wondering if my framing of this story as “sex-ed to smooth out the impact of puberty” is not quite fitting. I definitely have a sense that this story can play some beneficial role in promoting a more healthy sexuality in our society—though perhaps my framing about puberty is misplaced?
huh, thanks for the engagement guys—I definitely didn’t anticipate this to be so triggering…
I’m hearing two separate points here: 1) magic creatures and fairy tales do more to confuse than to clarify; 2) let’s be careful not to scare kids about sex nor make it a bigger deal than it already is. I think we could have a rich discourse about each of these, and I see many arguments to be made for both sides—with neither being a clearly resolved issue, imho. Just as an example, here are some possible counters I see to these:
1) What role do fairy tales and lore play in our education and building understanding? For one, “all models are wrong, some are useful”—so I don’t think that whether Santa exists or not is really the interesting question; I’d rather ask in what ways the story is helpful or confusing. Insofar as storytelling is a good vehicle for humans to convey values and information, it serves its purpose. As for lying to kids—I’d say we can keep Santa without claiming things about him that aren’t true. I think another important purpose of such lore is ritual—of which Christmas is an example. Ritual practices have a clear role and impact on people, which can be cognitively very beneficial if not abused.
2) Yes, sex may already be “too big of a deal,” but not in ways that are constructive / helpful. The hormonal impact of sex on our mind is hard to overstate—it really is a huge deal, for some people more than others. Since this is a question of qualia, I can reliably talk only about personal experience—and in retrospect I see that it ran my life for a number of years, the more so the more I repressed it. Learning to sublimate that energy, and really enjoy it in areas of life outside of sex, has been the single greatest shift I experienced in persistent personal happiness, energy, and productivity. And this is what I’m referring to in this story—to me, sex and its broader impact is the most magical thing I have experienced in life, and so if anything is worth calling magical, I’d say this is it.
Of course, both of these points are a biased side of the full story, and I wouldn’t personally 100% agree with these, as reality is always more subtle and balanced than such arguments. If you like, check out some other, perhaps more scientific discussions I wrote around related topics:
a rationalist perspective on “magic”: https://www.lesswrong.com/posts/uRiiNMCDdNnGo3Lqa/magic-tricks-and-high-dimensional-configuration-spaces
Is Santa Real—as an effective theory: https://www.pchvykov.com/post/is-santa-real
oh yeah, I’ve seen that one before—really awesome stuff! I guess you could say the goalkeeper discovers a “mental” dimension whereby it can beat the attacker more easily than by using the “physical” dimensions of directly blocking.
This all also feels related to Goodhart’s law—though subtly different...
Check out the follow-up post on this.
wow… I definitely did not know we were that intense about making things artificial…
and I like that argument to draw a parallel with horses—quite convincing.
I’m really interested in the question of what the difference is between human systems and things like ecosystems. There are definitely some advantages biological systems have—antifragility, adaptability, sustainability. On the other hand, as you point out, human-designed systems are more efficient, but at a narrower task.
So are there structural lessons we could adapt from biological system designs? Or are we good where we are?
Thanks for all the great comments! - I feel like the follow-up post I just published gets at some of them: https://www.lesswrong.com/posts/WNjKyFxNbhonBvhwz/building-cars-we-don-t-understand
oooh, don’t get me started on expectation values… I have heated opinions here, sorry. The two most obvious problems with expectations in this case are that, to average something, you need to average over some known space according to some chosen measure—and neither of these will be by any means obvious in a real-world scenario. More subtly, with real-world distributions, expectation values can often be infinite or undefined, and the median might be more representative—but then should you look at the mean, the median, or something else?
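As a quick illustration of the heavy-tail point (my own toy example, not from the original discussion): samples from a standard Cauchy distribution have no defined mean, so the sample average is dominated by a few enormous outliers, while the sample median stays pinned near the true center.

```python
import math
import random

def cauchy_samples(n, seed=0):
    """Draw n samples from a standard Cauchy distribution
    via the inverse-CDF trick: tan(pi * (U - 1/2))."""
    rng = random.Random(seed)
    return [math.tan(math.pi * (rng.random() - 0.5)) for _ in range(n)]

def median(xs):
    s = sorted(xs)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else 0.5 * (s[mid - 1] + s[mid])

xs = cauchy_samples(100_000, seed=42)

# The median is a stable estimate of the center (true value: 0)...
med = median(xs)
# ...while the mean never converges: a handful of huge samples
# can drag it arbitrarily far from the center.
avg = sum(xs) / len(xs)
```

Rerunning with different seeds, `med` barely moves while `avg` jumps around, which is exactly why the median can be the more representative summary for such distributions.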
To me, the counter-argument to saving drowning children isn’t the admittedly unlikely “Hitler” one, but rather the “let them learn from their own mistakes” one—some will learn to swim and grow up more resilient, and some won’t. The long-term impact of this approach on our species seems much harder to quantify.
wonderful—thanks so much for the references! “moral case against leaving the house” is a nice example to have in the back pocket :)
Just read a bit about rationalist understanding of “ritual”—seems that I’m sort of arguing that the value in donating is largely ritualistic :)
Wow, wonderful analysis! I’m mostly on board—though maybe I’d leave some room for doubt about some of the claims you’re making.
And your last paragraph seems to suggest that a “sufficiently good and developed” algorithm could produce large cultural change?
Also, you say “as human mediators (plus the problem of people framing it as ‘objective’), just cheaper and more scalable”—to me that would be quite a huge win! And I sort of thought that “people framing it as objective” is a good thing—why do you think it’s a problem?
I could even go so far as to say that even if it were totally inaccurate but unbiased—like a coin flip—then as long as people trusted it as objectively true, that would already help a lot! Unbiased = no advantage to either side. Trusted = no debate about who’s right. Random = no way to game it.
I think your perspective also relies on an implicit assumption which may be flawed. Not quite sure what it is exactly—but something around assuming that agents are primarily goal-directed entities. This is the game-theoretic context—and in that case, you may be quite right.
But here I’m trying to point out precisely that people have qualities beyond the assumptions of a game-theoretic setup. Most of the time we don’t actually know what our goals are or where those goals came from. So I guess here I’m thinking of people more as dynamical systems.