Mostly agree, but I think it’s wrong to imagine that you need a popular ideology to get a dictatorship. In Mao’s or Pol Pot’s case, for instance, it was really driven by a minority of “fanatics” at first. Then a larger number of people can (at least pretend to) join the ranks, and even become enforcers, until a stable equilibrium is reached: you don’t know who is pretending and who is a true believer, and trying to guess can be very risky.
I wonder if this has anything to do with Kalshi. The CFTC seems much more eager to go after weirder prediction markets (like Polymarket some time ago) now that it has a legit one.
I like the tree example, and I think it’s quite useful (and fun) to think of dumb and speculative ways for the genome to access world concepts. For instance, in response to “I infer that the genome cannot directly specify circuitry which detects whether you’re thinking about your family”, the genome could:
- Hardcode a face detector and store the faces seen most often during early childhood, for instance to link them to the reward center (a toy sketch of this is below).
- Store the faces of people whose odor is similar to amniotic fluid, or whose odor seems weak (if you’re insensitive to your own smell and family members smell more like you, their odor would register as weaker).
In these cases, I’m not sure whether it counts for you as the genome directly specifying circuitry, but it should point quite robustly to a real-world concept (one that could be “gamed” in certain situations, like adoptive parents, but I think that’s actually what happens there).
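To make the first mechanism concrete, here is a toy sketch (my own illustration, under the obviously simplified assumption that an innate detector has already turned raw experience into face identifiers): the genome only needs to hardcode the detector plus a “count and keep the most frequent” rule, and the binding to reward falls out of early experience.

```python
# Toy sketch of "hardcoded face detector + bind the most-seen faces to reward".
# The detector itself is assumed/hypothetical; this only shows the frequency rule.
from collections import Counter

class EarlyFaceBinding:
    def __init__(self, top_k=3):
        self.face_counts = Counter()  # how often each detected face was seen
        self.top_k = top_k            # how many "most seen" faces to bind

    def observe(self, face_id):
        """face_id: output of the (hypothetical) hardcoded face detector."""
        self.face_counts[face_id] += 1

    def reward_linked_faces(self):
        # The faces seen most often during the critical period are the ones
        # that end up pointed at as "family" and linked to the reward center.
        return [face for face, _ in self.face_counts.most_common(self.top_k)]

# Example: a child who mostly sees two caregivers ends up with those two bound,
# which is also why adoptive parents would be picked up just as well.
binding = EarlyFaceBinding(top_k=2)
for face in ["parent_a", "parent_a", "parent_b", "stranger", "parent_b", "parent_a"]:
    binding.observe(face)
print(binding.reward_linked_faces())  # ['parent_a', 'parent_b']
```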
Can you be a donor while living in Europe?
You might be interested in this blog post, which develops similar ideas and uses the same cache analogy.
For a more precise probabilistic approach to Fermat’s last theorem by Feynman, which takes into account the distribution of the nth powers, see this amazing article: http://www.lbatalha.com/blog/feynman-on-fermats-last-theorem
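For context, the kind of density heuristic involved (my paraphrase, not a quote from the article): there are about $N^{1/n}$ perfect nth powers below $N$, so the chance that a randomly chosen integer near $N$ is an nth power is roughly

$$\frac{d}{dN} N^{1/n} = \frac{1}{n} N^{\frac{1}{n}-1},$$

which shrinks fast as $n$ grows, making it very unlikely that $x^n + y^n$ happens to land exactly on an nth power for large $n$.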
Thanks for this! One thing I don’t understand about influence functions is: why should I care about the proximal Bregman objective? To interpret a model, I’m really interested in the LOO retraining, right? Can we still say things like “it seems that the model relied on this training sample for producing this output” with the PBO interpretation?
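(For concreteness, the LOO-style quantity I have in mind is the classic influence-function estimate, which, up to sign conventions, approximates the effect of removing a training point $z_m$ on the loss at a query $z_c$ as

$$\mathcal{I}(z_m, z_c) \approx \nabla_\theta L(z_c, \hat\theta)^{\top} H_{\hat\theta}^{-1} \nabla_\theta L(z_m, \hat\theta),$$

with $H_{\hat\theta}$ the Hessian of the training loss at the trained parameters. My question is how to read such scores when the quantity actually being approximated is the PBO rather than LOO retraining.)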