links 01/27/25: https://roamresearch.com/#/app/srcpublic/page/01-27-2025
https://asteriskmag.com/issues/09/where-the-wild-things-arent Agnes Callard is strange and unsettling here. I wonder what this is “really” about.
“Being a person is too hard a job to leave to a single person. We can’t do it on our own, not even as adults. Figuring out how to be a person is a group project, and we have to help each other. But the catch is that we don’t really know what we are doing, so sometimes we end up hurting each other instead. When you are weird, you experience this hurt. Social categories have been poorly constructed and fail to conduce to human happiness. The weird person is a record of the mistakes we have made.”
it sounds like her own “weirdness” is experienced as a source of pain, not as a stable and beloved personal identity. and that she’s uncomfortable with public celebration of “weirdness”.
admittedly it is a bit paradoxical that contemporary culture celebrates “weirdos” in a not-very-individualistic way. the nonconformists who are celebrated do, actually, conform to their own little club rules.
this never particularly bothered me, though!
if i find a club i want to be a member of, that’s fantastic. i’m not attached to being literally unique.
if i find that existing “labels” or “groups” don’t entirely suit me as an individual, that can be a bummer, but i don’t think i would be better off if no fine-grained identity categories existed and i was expected to conform with everyone in my geographical location.
there’s a very natural explanation for why children’s books star alienated weirdos: writers are not typical people!
beloved children’s book authors were writing to children like me, the children who read a lot of books and might grow up to be writers ourselves.
this isn’t some paradoxical thing.
what is “normal” (both common and normative) in the world of books is what is “normal” for text-native obligate readers and writers, which does in fact mean being different from the majority! Bookishness is a minority trait!
Bookish people, as a rule, are glad we are this way, and eager to acculturate potential kindred spirits into bookishness. This seems generally healthy to me.
sure, be a little thoughtful about not making depictions of alienation into self-fulfilling prophecies, but I think a little bit of care and taste suffices. no need to angst about “what if we are BAD ROLE MODELS”. it’s okay to like your own quirks.
https://www.ams.org/journals/notices/202502/noti3114/noti3114.html anatomy of a Lean proof
https://en.m.wikipedia.org/wiki/Marquis_de_Sade
he sounds genuinely awful, though i haven’t read his writing
links 01/29/25: https://roamresearch.com/#/app/srcpublic/page/01-29-2025
https://marginalrevolution.com/marginalrevolution/2025/01/its-time-to-build-the-peptidome.html link to a piece I’d already read, by Maxwell Tabarrok, on the value of building a dataset of peptides (their sequences, structures, and properties) for training models that can discover pharmacologically useful peptides (for instance, antimicrobials).
https://www.biorxiv.org/content/10.1101/2025.01.24.634830v1 modified regulatory T cells can be turned against the macrophage foam cells that produce atherosclerotic plaque, reducing heart disease (in mice).
how do they do it?
oxidized LDL (OxLDL) is a marker of inflammation and atherosclerosis. it’s also a lipoprotein (the acronym stands for Low-Density Lipoprotein), so you can target it with an antibody!
regulatory T cells (Tregs) genetically modified to contain one of these anti-OxLDL antibodies prevent the OxLDL from getting into macrophages and turning them into foam cells.
inject them into live mice on a high-fat diet and they also get less atherosclerosis than control mice
interestingly, they make Tregs from CD4+ (helper) T cells by getting them to express FOXP3. this will be useful for human clinical application, since natural Tregs are rare and can be difficult to extract in sufficient quantity.
https://www.wired.com/story/elon-musk-lackeys-office-personnel-management-opm-neuralink-x-boring-stalin/
https://talentmarket.org/cato-psych-policy-analyst/ ugh. they basically have already written the bottom line on what they want a new hire to think about the “psychology of progress”, and it’s mostly talking points that have been made ad infinitum already.
surely if you were serious about overcoming psychological barriers in the general public that make them ill-disposed to objectively beneficial economic/technological changes, you’d want to look for new ideas (since clearly past ones haven’t worked). you’d also want to focus on starting with empathy and common ground, if the hope is to actually change minds. this is a job description that, ironically, guarantees stasis, not progress.
https://stephenmalina.com/post/2023-11-04-biologizing-the-stack/ I’m glad he admits this is a contrarian position, because the straightforward lesson of the past several decades is that you want as few things to be biomanufactured as possible: living things make incredibly cost-inefficient factories and should be a last resort.
https://unstableontology.com/2022/05/02/on-the-paradox-of-tolerance-in-relation-to-fascism-and-online-content-moderation/ I like this post, but on the general topic, how could you possibly implement Popper’s standard of “we must not tolerate speech that incites people to not listen to arguments”? That would, itself, require an extensive censorship apparatus.
1st Amendment Law seems much more practical in protecting all political opinions and having exceptions only for “true threats” or narrow “incitement to violence”. Saying “don’t read this book, it’s by a fascist” is intolerance by Popper’s definition, but it’s straightforwardly protected speech under US law and I think it should be; we don’t trust a government agency to adjudicate questions of epistemic vice.
1st Amendment law has an almost refreshingly nihilistic attitude to discussion—all “opinion”, including iirc all discussion on social media, is assumed to be neither true nor false, so it can’t be considered defamatory. basically, under the law, “this is all just yapping, people get to yap; call me when they make a false factual claim that costs you money, or an actual plan to physically hurt someone”. sometimes, in addition to being a safer standard for one’s government to hold, this is a healthy attitude to adopt oneself!
Stephen Wolfram on machine learning: https://writings.stephenwolfram.com/2024/08/whats-really-going-on-in-machine-learning-some-minimal-models/
first, we visualize how values at the nodes of a neural network (during inference) change based on the inputs. complicated!
then we look at how weights on edges change as a neural network is trained. also complicated! though you can see the changes getting smaller as the network approaches convergence.
a mesh neural net—each node is only connected to its neighbors in a grid—is a cellular automaton. Cellular automata are a special case of neural networks, in other words.
you can do an analogue of “training a model to fit a function” with something called a “rule array”: you have a grid, and each square has a local update rule relative to its neighbors, cellular-automaton style, but the rule can differ from square to square. If your “input” is a black square on the top row, then “running” the rules repeatedly may propagate a black-and-white pattern down the grid. you can then “adapt” the rule array iteratively to get a desired pattern (like one that lasts for exactly 30 steps) or a desired function: certain “outputs” on the bottom row, given an “input” on the top row. to train, “mutate” cells at random, keeping only the mutations where the distance from the “training examples” doesn’t get worse.
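that mutate-and-keep loop is easy to sketch in Python. this is my own toy version, not Wolfram’s exact setup: the grid size, the “shift the black cell two to the right” task, and the single-bit mutation scheme are all illustrative choices.

```python
import random
random.seed(0)

WIDTH, DEPTH = 11, 8
NBHDS = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

def run(rules, top):
    # rules[r][c] maps the 3-cell neighborhood in the row above to cell (r, c)'s value
    row = list(top)
    for r in range(DEPTH):
        row = [rules[r][c][(row[(c - 1) % WIDTH], row[c], row[(c + 1) % WIDTH])]
               for c in range(WIDTH)]
    return row

def random_rules():
    # a different random local rule for every square of the grid
    return [[{n: random.randint(0, 1) for n in NBHDS} for _ in range(WIDTH)]
            for _ in range(DEPTH)]

# toy task: move the single black cell two positions to the right (wrapping around)
examples = []
for i in range(WIDTH):
    top = [0] * WIDTH; top[i] = 1
    bottom = [0] * WIDTH; bottom[(i + 2) % WIDTH] = 1
    examples.append((top, bottom))

def loss(rules):
    return sum(a != b for top, want in examples
               for a, b in zip(run(rules, top), want))

rules = random_rules()
best = initial = loss(rules)
for _ in range(10000):
    r, c = random.randrange(DEPTH), random.randrange(WIDTH)
    n = random.choice(NBHDS)
    rules[r][c][n] ^= 1            # flip one entry of one cell's rule
    new = loss(rules)
    if new <= best:                # keep mutations that don't make things worse
        best = new
    else:
        rules[r][c][n] ^= 1        # revert
print("training loss went from", initial, "to", best)
```

random single-point mutation with “accept if not worse” is exactly the crude adaptive-evolution scheme described above; it drifts through neutral changes and ratchets downhill.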
you can observe which cells mutate along the training runs; they seem to be the ones along the “ideal path” between x (at the top) and f(x) (at the bottom).
doing mutations at random is inefficient; you can do an equivalent of “steepest descent” in cellular automata too, but this tends to get stuck
there’s an equivalent of “backpropagation” in automata-land too
in general, why translate to automata? he says they’re easier to “inspect” because simpler, but I kinda don’t get it.
he even builds a discrete analog of a transformer!
“It could have been that machine learning would somehow “crack systems”, and find simple representations for what they do. But that doesn’t seem to be what’s going on at all. Instead what seems to be happening is that machine learning is in a sense just “hitching a ride” on the general richness of the computational universe. It’s not “specifically building up behavior one needs”; rather what it’s doing is to harness behavior that’s “already out there” in the computational universe.”
yep this is a convergent idea. the secret sauce in machine learning is just having a rich enough space of possible functions, and a means of variation and selection. you can get similar results using things like genetic algorithms or cellular automata that aren’t “neural networks” at all.
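to make the “variation and selection over a rich function space, no neural network required” point concrete, here is a minimal genetic-algorithm sketch of my own: fitting a curve with nothing but mutation and selection over polynomial coefficients (the target function, population size, and mutation scale are arbitrary choices).

```python
import random
random.seed(1)

# fit f(x) = x^2 on sample points with a cubic, using pure
# mutation + selection over coefficient vectors (no gradients, no network)
xs = [i / 10 for i in range(-10, 11)]
target = [x * x for x in xs]

def predict(coeffs, x):
    return sum(c * x ** k for k, c in enumerate(coeffs))

def loss(coeffs):
    return sum((predict(coeffs, x) - t) ** 2 for x, t in zip(xs, target))

pop = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(20)]
init_loss = min(loss(p) for p in pop)
for _ in range(300):
    pop.sort(key=loss)
    survivors = pop[:5]                  # selection
    pop = [s[:] for s in survivors]      # elitism: the best are kept as-is
    while len(pop) < 20:                 # variation: perturb a survivor
        child = random.choice(survivors)[:]
        child[random.randrange(4)] += random.gauss(0, 0.1)
        pop.append(child)

best = min(pop, key=loss)
print("loss improved from", round(init_loss, 3), "to", round(loss(best), 3))
```

the representation here (polynomials) is deliberately not a neural net; all the work is done by having enough knobs plus a variation-and-selection loop.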
Stephen Wolfram on formal verification: https://writings.stephenwolfram.com/2025/01/who-can-understand-the-proof-a-window-on-formalized-mathematics/
Wolfram Language allows proof verification!
you can also visualize the dependency graph of lemmas.
sometimes the “high-degree nodes” are legible things like commutativity
but often they, and “commonly used lemmas” in randomly generated proofs, are weird elaborate non-human-interpretable things.
What about proof-to-proof equivalences?
this is where the “homotopy” metaphors come from. you could find a “path” from one proof to another...but what if there are “holes” in proof-space? “Then a “continuous deformation” of one proof into another will get stuck, and even if there is a much shorter proof, we’re liable to get “topologically stuck” before we find it.”
https://kmill.github.io/informalization/ucsc_cse_talk.pdf
nice explanation of formalization (in Lean), auto-formalization and auto-informalization
if you just throw ChatGPT at this, it sucks.
so let’s do a little ontology hard-coding instead.
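for flavor, here’s the kind of informal/formal pairing involved, as a toy Lean 4 example of my own (not taken from the talk):

```lean
-- informal statement: "0 + n = n for every natural number n"
-- (n + 0 = n holds by definition, but 0 + n needs an induction)
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```

auto-formalization is the problem of producing the bottom half from the top comment; auto-informalization is the reverse.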
https://moreisdifferent.blog/p/german-scientific-paternalism how Germany in the late 19th-early 20th century trained scientists
https://kordinglab.com/about/ an approach to figuring out what neurons do by mapping between simulations, electrophysiology data, and psychophysics. “what algorithm is being implemented here?”
https://www.hypothesisfund.org/ a Reid Hoffman project: funds breakthrough research, mostly life sciences.
this doesn’t “smell” aggressive enough to me—the projects look fine but i’m surprised they’d be unfundable elsewhere—but maybe it’s just an insufficiently pointed communication style.
https://en.wikipedia.org/wiki/Robert_Duggan_(venture_capitalist) strange fellow. apparently he’s a Scientologist and knows nothing about biology but has a knack for picking winners. picked up ibrutinib!!!