Psychotic “delusions” are more about holding certain genres of idea with a socially inappropriate amount of intensity and obsession than about holding a false idea. Lots of non-psychotic people hold false beliefs (e.g. religious people). And, interestingly, it is absolutely possible to hold a true belief in a psychotic way.
I have observed people during psychotic episodes get obsessed with the idea that social media was sending them personalized messages (quite true; targeted ads are real) or the idea that the nurses on the psych ward were lying to them (they were).
Preoccupation with the revelation of secret knowledge, with one’s own importance, with mistrust of others’ motives, and with influencing others’ thoughts or being influenced by others’ thoughts, is a classic psychotic theme.
And it can be a symptom of schizophrenia when someone’s mind gets disproportionately drawn to those themes. This is called being “paranoid” or “grandiose.”
But sometimes (and I suspect more often with more intelligent/self-aware people) the literal content of their paranoid or grandiose beliefs is true!
sometimes the truth really has been hidden!
sometimes people really are lying to you or trying to manipulate you!
sometimes you really are, in some ways, important! sometimes influential people really are paying attention to you!
of course people influence each other’s thoughts—not through telepathy but through communication!
a false psychotic-flavored thought is “they put a chip in my brain that controls my thoughts.” a true psychotic-flavored thought is “Hollywood moviemakers are trying to promote progressive values in the public by implanting messages in their movies.”
These thoughts can come from the same emotional drive, they are drawn from dwelling on the same theme of “anxiety that one’s own thoughts are externally influenced”, they are in a deep sense mere arbitrary verbal representations of a single mental phenomenon...
but if you take the content literally, then clearly one claim is true and one is false.
and a sufficiently smart/self-aware person will feel the “anxiety-about-mental-influence” experience, will search around for a thought that fits that vibe but is also true, and will come up with something a lot more credible than “they put a mind-control chip in my brain”, but is fundamentally coming from the same motive.
There’s an analogous, easier-to-recognize thing with depression.
A depressed person’s mind is unusually drawn to obsessing over bad things. But this obviously doesn’t mean that no bad things are real or that no depressive’s depressing claims are true.
When a depressive literally believes they are already dead, we call that Cotard’s Delusion, a severe form of psychotic depression. When they say “everybody hates me” we call it a mere “distorted thought”. When they talk accurately about the heat death of the universe we call it “thermodynamics.” But it’s all coming from the same emotional place.
In general, mental illnesses, and mental states generally, provide a “tropism” towards thoughts that fit with certain emotional/aesthetic vibes.
Depression makes you dwell on thoughts of futility and despair
Anxiety makes you dwell on thoughts of things that can go wrong
Mania makes you dwell on thoughts of yourself as powerful or on the extreme importance of whatever you’re currently doing
Paranoid psychosis makes you dwell on thoughts of mistrust, secrets, and influencing/being influenced
You can, to some extent, “filter” your thoughts (or the ones you publicly express) by insisting that they make sense. You still have a bias towards the emotional “vibe” you’re disposed to gravitate towards; but maybe you don’t let absurd claims through your filter even if they fit the vibe. Maybe you grudgingly admit the truth of things that don’t fit the vibe but technically seem correct.
this does not mean that the underlying “tropism” or “bias” does not exist!!!
this does not mean that you believe things “only because they are true”!
in a certain sense, you are doing the exact same thing as the more overtly irrational person, just hiding it better!
the “bottom line” in terms of vibe has already been written, so it conveys no “updates” about the world
the “bottom line” in terms of details may still be informative because you’re checking that part and it’s flexible
“He’s not wrong but he’s still crazy” is a valid reaction to someone who seems to have a mental-illness-shaped tropism to their preoccupations.
e.g. if every post he writes, on a variety of topics, is negative and gloomy, then maybe his conclusions say more about him than about the truth concerning the topic;
he might still be right about some details but you shouldn’t update too far in the direction of “maybe I should be gloomy about this too”
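In Bayesian terms, the reason the gloomy "vibe" conveys no update is that the likelihood ratio is close to 1: the author would have written a gloomy post whether or not the topic is genuinely bad. A toy sketch in Python (all probabilities invented for illustration):

```python
# Toy Bayesian update: how much should a gloomy post shift your belief
# that the topic is actually bad? All numbers are made up.

def posterior(prior_bad, p_gloom_if_bad, p_gloom_if_fine):
    """P(topic is bad | author wrote a gloomy post), via Bayes' rule."""
    p_gloom = p_gloom_if_bad * prior_bad + p_gloom_if_fine * (1 - prior_bad)
    return p_gloom_if_bad * prior_bad / p_gloom

# A calibrated author is gloomy mostly when things are actually bad:
print(posterior(0.3, 0.9, 0.1))   # -> ~0.79; gloom is strong evidence

# An author with a depressive "tropism" is gloomy regardless:
print(posterior(0.3, 0.95, 0.9))  # -> ~0.31; barely above the 0.3 prior
```

The second author's conclusion ("this is bad") carries almost no information about the topic, though his specific factual details may still be checkable and informative, as the quote above says.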
Conversely, “this sounds like a classic crazy-person thought, but I still separately have to check whether it’s true” is also a valid and important move to make (when the issue is important enough to you that the extra effort is worth it).
Just because someone has a mental illness doesn’t mean every word out of their mouth is false!
(and of course this assumption—that “crazy” people never tell the truth—drives a lot of psychiatric abuse.)
I once saw a video on Instagram of a psychiatrist recommending to other psychiatrists that they purchase ear scopes to check out their patients’ ears, because:
1. Apparently it is very common for folks with severe mental health issues to imagine that there is something in their ear (e.g., a bug, a listening device)
2. Doctors usually just say “you are wrong, there’s nothing in your ear” without looking
3. This destroys trust, so he started doing cursory checks with an ear scope
4. Far more often than he expected (I forget exactly, but something like 10-20%ish), there actually was something in the person’s ear—usually just earwax buildup, but occasionally something else like a dead insect—that was indeed causing the sensation, and he gained a clinical pathway to addressing his patients’ discomfort that he had previously lacked
It’s pretty far from meeting dath ilan’s standard, though; in fact, an x-ray would be more than sufficient: anyone capable of putting something in someone’s ear would obviously vastly prefer to place it somewhere harder to check, whereas nobody could defeat an x-ray machine, since metal parts are unavoidable.
This concern pops up in books on the Cold War (employees at every org and every company regularly suffer from mental illnesses at somewhere around their base rates, but things get complicated at intelligence agencies, where paranoid/creative/adversarial people are rewarded and even influence R&D funding), and an x-ray machine cleanly resolved the matter every time.
Schizophrenia is the archetypal definitely-biological mental disorder, but recently for reasons relevant to the above, I’ve been wondering if that is wrong/confused. Here’s my alternate (admittedly kinda uninformed) model:
Psychosis is a biological state or neural attractor, which we can kind of symptomatically characterize, but which really can only be understood at a reductionistic level.
One of the symptoms/consequences of psychosis is getting extreme ideas at extreme amounts of intensity.
This symptom/consequence then triggers a variety of social dynamics that give classic schizophrenic-like symptoms such as, as you say, “preoccupation with the revelation of secret knowledge, with one’s own importance, with mistrust of others’ motives, and with influencing others’ thoughts or being influenced by others’ thoughts”
That is, if you suddenly get an extreme idea (e.g. that the fly that flapped past you is a sign from god that you should abandon your current life), you would expect dynamics like:
People get concerned for you and try to dissuade you, likely even conspiring in private to do so (and even if they’re not conspiring, it can seem like a conspiracy). In response, it might seem appropriate to distrust them.
Or, if one interprets it as them just lacking the relevant information, one needs to develop some theory of why one has access to special information that they don’t.
Or, if one is sympathetic to their concern, it would be logical to worry about one’s thoughts getting influenced.
But these sorts of dynamics can totally be triggered by extreme beliefs without psychosis! This might also be related to how Enneagram type 5 (the rationalist type) is especially prone to schizophrenia-like symptoms.
(When I think “in a psychotic way”, I think of the neurological disorder, but it seems like the way you use it in your comment is more like the schizophrenia-like social dynamic?)
In general, mental illnesses, and mental states generally, provide a “tropism” towards thoughts that fit with certain emotional/aesthetic vibes.
Depression makes you dwell on thoughts of futility and despair
Anxiety makes you dwell on thoughts of things that can go wrong
Mania makes you dwell on thoughts of yourself as powerful or on the extreme importance of whatever you’re currently doing
Paranoid psychosis makes you dwell on thoughts of mistrust, secrets, and influencing/being influenced
Also tangentially, this is sort of a “general factor” model of mental states. That often seems applicable, but recently my default interpretation of factor models has been that they tend to get at intermediary variables and not root causes.
Let’s take an analogy with computer programs. If you look at the correlations in which sorts of processes run fast or slow, you might find a broad swathe of processes whose performance is highly correlated, because they are all predictably CPU-bound. However, when these processes are running slow, there will usually be some particular program that is exhausting the CPU and preventing the others from running. This problematic program can vary massively from computer to computer, so it is hard to predict or model in general, but often easy to identify in the particular case by looking at which program is most extreme.
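The analogy above can be simulated in a few lines (all numbers and names invented): many "processes" share one bottleneck, so their speeds correlate strongly and a single general factor emerges, even though on any given machine the root cause is one specific hog program.

```python
import random

# Simulate many "computers", each running the same set of processes.
# On each computer, some particular hog program loads the CPU, and every
# CPU-bound process slows down together with the others.
random.seed(0)
n_computers, n_processes = 1000, 5

speeds = []
for _ in range(n_computers):
    cpu_load = random.uniform(0.1, 0.9)  # caused by that computer's hog
    # each process's speed = shared CPU headroom + small individual noise
    speeds.append([(1 - cpu_load) + random.gauss(0, 0.05)
                   for _ in range(n_processes)])

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Any two processes are highly correlated: a "general factor" appears,
# but it is an intermediary (CPU headroom), not the root cause (the hog).
p0 = [s[0] for s in speeds]
p1 = [s[1] for s in speeds]
print(corr(p0, p1))  # high, close to 1
```

The factor analysis sees only the shared headroom; identifying the hog on a particular machine requires looking at the individual case, which is the point of the analogy.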
Thank you, this is interesting and important. I worry that it overstates similarity of different points on a spectrum, though.
in a certain sense, you are doing the exact same thing as the more overtly irrational person, just hiding it better!
In a certain sense, yes. In other, critical senses, no. This is a case where quantitative differences are big enough to be qualitative. When someone is clinically delusional, there are a few things which distinguish it from the more common wrong ideas. Among them, the inability to shut up about it when it’s not relevant, and the large negative impact on relationships and daily life. For many many purposes, “hiding it better” is the distinction that matters.
I fully agree that “He’s not wrong but he’s still crazy” is valid (though I’d usually use less-direct phrasing). It’s pretty rare that “this sounds like a classic crazy-person thought, but I still separately have to check whether it’s true” happens to me, but it’s definitely not never.
For a while I ended up spending a lot of time thinking about specifically the versions of the idea where I couldn’t easily tell how true they were… which I suppose I do think is the correct place to be paying attention to?
One has to be a bit careful with this though. E.g. someone experiencing or having experienced harassment may have a seemingly pathological obsession on the circumstances and people involved in the situation, but it may be completely proportional to the way that it affected them—it only seems pathological to people who didn’t encounter the same issues.
If it’s not serving them, it’s pathological by definition, right?
So obsessing about exactly those circumstances and types of people could be pathological if it’s done more than will protect them in the future, weighing in the emotional cost of all that obsessing.
Of course we can’t just stop patterns of thought as soon as we decide they’re pathological. But deciding it doesn’t serve me so I want to change it is a start.
Yes, it’s proportional to the way it affected them—but most of the effect is in the repetition of thoughts about the incident and fear of future similar experiences. Obsessing about unpleasant events is natural, but it often seems pretty harmful itself.
Trauma is a horrible thing. There’s a delicate balance between supporting someone’s right and tendency to obsess over their trauma while also supporting their ability to quit re-traumatizing themselves by simulating their traumatic event repeatedly.
If it’s not serving them, it’s pathological by definition, right?
This seems way too strong, otherwise any kind of belief or emotion that is not narrowly in pursuit of your goals is pathological.
I completely agree that it’s important to strike a balance between revisiting the incident and moving on.
but most of the effect is in the repetition of thoughts about the incident and fear of future similar experiences.
This seems partially wrong. The thoughts are usually consequences of the damage that is done, and they can be unhelpful in their own right, but they are not usually the problem. E.g. if you know that X is an abuser and people don’t believe you, I wouldn’t go so far as saying your mental dissonance about it is the problem.
Some psychiatry textbooks classify “overvalued ideas” as distinct from psychotic delusions.
Depending on how wide you make the definition, a whole rag-bag of diagnoses from the DSM-5 are overvalued ideas (e.g., anorexia nervosa and the overvalued idea of being fat).
it’s wrong to try to control people or stop them from doing locally self-interested & non-violent things in the interest of “humanity’s future”, in part because this is so futile.
if the only way we survive is if we coerce people to make a costly and painful investment in a speculative idea that might not even work, then we don’t survive! you do not put people through real pain today for a “someday maybe!” This applies to climate change, AI x-risk, and socially-conservative cultural reform.
most cultures and societies in human history have been so bad, by my present values, that I’m not sure they’re not worse than extinction, and we should expect that most possible future states are similarly bad;
history clearly teaches us that civilizations and states collapse (on timescales of centuries) and the way to bet is that ours will as well, but it’s kind of insane hubris to think that this can be prevented;
the literal species Homo sapiens is pretty resilient and might avoid extinction for a very long time, but have you MET Homo sapiens? this is cold fucking comfort! (see e.g. C. J. Cherryh’s vision in 40,000 in Gehenna for a fictional representation not far from my true beliefs — we are excellent at adaptation and survival but when we “survive” this often involves unimaginable harshness and cruelty, and changing into something that our ancestors would not have liked at all.)
identifying with species-survival instead of with the stuff we value now is popular among the thoughtful but doesn’t make any sense to me;
in general it does not make sense, to me, to compromise on personal values in order to have more power/influence. you will be able to cause stuff to happen, but who cares if it’s not the stuff you want?
similarly, it does not make sense to consciously optimize for having lots of long-term descendants. I love my children; I expect they’ll love their children; but go too many generations out and it’s straight-up fantasyland. My great-grandparents would have hated me. And that’s still a lot of shared culture and values! Do you really have that much in common with anyone from five thousand years ago?
Evolution is not your friend. God is not your friend. Everything worth loving will almost certainly perish. Did you expect it to last forever?
“I love whatever is best at surviving” or “I love whatever is strongest” means you don’t actually care what it’s like. It means you have no loyalty and no standards. It means you don’t care so much if the way things turn out is hideous, brutal, miserable, abusive… so long as it technically “is alive” or “wins”. Fuck that.
I despise sour grapes. If the thing I want isn’t available, I’m not going to pretend that what is available is what I want.
I am not going to embrace the “realistic” plan of allying with something detestable but potent. There is always an alternative, even if the only alternative is “stay true to your dreams and then get clobbered.”
it’s wrong to try to control people or stop them from doing locally self-interested & non-violent things in the interest of “humanity’s future”, in part because this is so futile.
if the only way we survive is if we coerce people to make a costly and painful investment in a speculative idea that might not even work, then we don’t survive! you do not put people through real pain today for a “someday maybe!” This applies to climate change, AI x-risk, and socially-conservative cultural reform.
How does “this is so futile” square with the massive success of taxes and criminal justice? From what I’ve heard, states have managed to reduce murder rates by 50x. Obviously that’s stopping people from something violent rather than non-violent, but what’s the aspect of violence that makes it relevant? Or e.g. how about taxes which fund change to renewable energy? The main argument for socially-conservative cultural reform is fertility, but what about taxes that fund kindergartens, they sort of seem to have a similar function?
The key trick to make it correct to try to control people or stop them is to be stronger than them.
I think this prompts some kind of directional update in me. My paraphrase of this is:
it’s actually pretty ridiculous to think you can steer the future
It’s also pretty ridiculous to choose to identify with what the future is likely to be.
Therefore…. Well, you don’t spell out your answer. My answer is “I should have a personal meaning-making resolution to ‘what would I do if those two things are both true,’ even if one of them turns out to be false, so that I can think clearly about whether they are true.”
I’ve done a fair amount of similar meaning-making work through the lens of Solstice 2022 and 2023. But that was more through the lens of ‘nearterm extinction’ than ‘inevitability of value loss’, which does feel like a notably different thing.
So it seems worth doing some thinking and pre-grieving about that.
I of course have some answers to ‘why value loss might not be inevitable’, but it’s not something I’ve yet thought about through an unclouded lens.
Therefore, do things you’d be in favor of having done even if the future will definitely suck. Things that are good today, next year, fifty years from now… but not like “institute theocracy to raise birth rates”, which is awful today even if you think it might “save the world”.
I honestly feel that the only appropriate response is something along the lines of “fuck defeatism”[1].
This comment isn’t targeted at you, but at a particular attractor in thought space.
Let me try to explain why I think rejecting this attractor is the right response rather than engaging with it.
I think it’s mostly that I don’t think that talking about things at this level of abstraction is useful. It feels much more productive to talk about specific plans. And if you have a general, high-abstraction argument that plans in general are useless, but I have a specific argument why a specific plan is useful, I know which one I’d go with :-).
Don’t get me wrong, I think that if someone struggles for a certain amount of time to try to make a difference and just hits wall after wall, then at some point they have to call it. But “never start” and “don’t even try” are completely different.
It’s also worth noting, that saving the world is a team sport. It’s okay to pursue a plan that depends on a bunch of other folk stepping up and playing their part.
What about influencing? If, in order for things to go OK, human civilization must follow a narrow path which I individually need to steer us down, we’re 100% screwed because I can’t do that. But I do have some influence: a great deal of influence over my own actions (I’m resisting the temptation to go down a sidetrack about determinism, assuming you’re modeling humans as things that can make meaningful choices), substantial influence over the actions of those close to me, some influence over my acquaintances, and so on until very extremely little (but not 0) influence over humanity as a whole.

I also note that you use the word “we”, but I don’t know who the “we” is. Is it everyone? If so, then everyone collectively has a great deal of say about how the future will go, if we collectively can coordinate. Admittedly, we’re not very good at this right now, but there are paths to developing this civilizational skill further than we currently have. So maybe the answer to “we can’t steer the future” is “not yet we can’t, at least not very well”?
it’s wrong to try to control people or stop them from doing locally self-interested & non-violent things in the interest of “humanity’s future”, in part because this is so futile.
if the only way we survive is if we coerce people to make a costly and painful investment in a speculative idea that might not even work, then we don’t survive! you do not put people through real pain today for a “someday maybe!” This applies to climate change, AI x-risk, and socially-conservative cultural reform.
Agree, mostly. The steering I would aim for would be setting up systems wherein the locally self-interested and non-violent things people are incentivized to do have positive effects for humanity’s future. In other words, setting up society such that individual and humanity-wide effects point in the same direction with respect to some notion of “goodness”, rather than individual actions harming the group, or group actions harming or stifling the individual.

We live in a society where we can collectively decide the rules of the game, which is a way of “steering” a group. I believe we should settle on a ruleset where individual short-term moves that seem good lead to collective long-term outcomes that seem good. Individual short-term moves that clearly lead to bad collective long-term outcomes should be disincentivized, and if the effects are bad enough then coercive prevention does seem warranted (e.g., a SWAT team to prevent a mass shooting). Similarly for groups stifling individuals’ ability to do things that seem to them to be good for them in the short term. And rules that have perverse incentive effects, harmful to the individual, the group, or both? Definitely out.

This type of system design is like a haiku—very restricted in what design choices are permissible, but not impossible in principle. It seems worth trying, because if successful, everything is good with no coercion. If even a tiny subsystem can be designed (or the current design tweaked) in this way, that by itself is good.
And the right local/individual move to influence the systems of which you are a part towards that state, as a cognitively-limited individual who can’t hold the whole of complex systems in their mind and accurately predict the effect of proposed changes out into the far future, might be as simple as saying “in this instance, you’re stifling the individual” and “in this instance you’re harming the group/long-term future” wherever you see it, until eventually you get a system that does neither. Like arriving at a haiku by pointing out every time the rules of haiku construction are violated.
I disagree a lot! Many things have gotten better! Are suffrage, abolition, democracy, property rights, etc. not significant? All the random stuff that e.g. Better Angels of Our Nature claims has gotten better.
Either things have improved in the past or they haven’t, and either people trying to “steer the future” have been influential on these improvements or they haven’t. I think things have improved, and I think there’s definitely not strong evidence that people trying to steer the future was always useless. Because trying to steer the future is very important and motivating, I try to do it.
Yes, the counterfactual impact of you individually trying to steer the future may or may not be insignificant, but people trying to steer the future is better than no one doing that!
“I love whatever is best at surviving” or “I love whatever is strongest” means you don’t actually care what it’s like. It means you have no loyalty and no standards. It means you don’t care so much if the way things turn out is hideous, brutal, miserable, abusive… so long as it technically “is alive” or “wins”. Fuck that.
Proposal: For any given system, there’s a destiny based on what happens when it’s developed to its full extent. Sight is an example of this, where both human eyes and octopus eyes and cameras have ended up using lenses to steer light, despite being independent developments.
“I love whatever is the destiny” is, as you say, no loyalty and no standards. But, you can try to learn what the destiny is, and then on the basis of that decide whether to love or oppose it.
Plants and solar panels are the natural destiny for earthly solar energy. Do you like solarpunk? If so, good news, you can love the destiny, not because you love whatever is the destiny, but because your standards align with the destiny.
1) Regarding tiling the universe with computronium as destiny is Gnostic heresy.
2) I would like to learn more about the ecology of space infrastructure. Intuitively it seems to me like the Earth is much more habitable than anywhere else, and so I would expect Sarah’s “this is so futile” point to actually be inverted when it comes to e.g. a Dyson sphere, where the stagnation-inducing worldwide regulation will by default be stronger than the entropic pressure.
More generally, I have a concept I call the “infinite world approximation”, which I think held until ~WWI. Under this approximation, your methods have to be robust against arbitrary adversaries, because they could invade from parts of the ecology you know nothing about. However, this approximation fails for Earth-scale phenomena, since Earth-scale organizations could shoot down any attempt at space colonization.
I would more say the opposite: Henri Bergson (better known for inventing vitalism) convinced me that there ought to be a simple explanation for the forms life takes, and so I spent a while performing root cause analysis on that, and ended up with the sun as the creator.
This post reads like it’s trying to express an attitude or put forward a narrative frame, rather than trying to describe the world.
Many of these claims seem obviously false, if I take them at face value and take a moment to consider what they’re claiming and whether it’s true.
e.g., On the first two bullet points it’s easy to come up with counterexamples. Some successful attempts to steer the future, by stopping people from doing locally self-interested & non-violent things, include: patent law (“To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries”) and banning lead in gasoline. As well as some others that I now see that other commenters have mentioned.
history clearly teaches us that civilizations and states collapse (on timescales of centuries) and the way to bet is that ours will as well, but it’s kind of insane hubris to think that this can be prevented;
It seems like it makes some difference whether our civilization collapses the way that the Roman Empire collapsed, the way that the British Empire collapsed, or the way that the Soviet Union collapsed. “We must prevent our civilization from ever collapsing” is clearly an implausible goal, but “we should ensure that a successor structure exists and is not much worse than what we have now” seems rather more reasonable, no?
I don’t think it was articulated quite right—it’s more negative than my overall stance (I wrote it when unhappy) and a little too short-termist.
I do still believe that the future is unpredictable, that we should not try to “constrain” or “bind” all of humanity forever using authoritarian means, and that there are many many fates worse than death and we should not destroy everything we love for “brute” survival.
And, also, I feel that transience is normal and only a bit sad. It’s good to save lives, but mortality is pretty “priced in” to my sense of how the world works. It’s good to work on things that you hope will live beyond you, but Dark Ages and collapses are similarly “priced in” as normal for me. Sara Teasdale: “You say there is no love, my love, unless it lasts for aye; Ah folly, there are episodes far better than the play!” If our days are as a passing shadow, that’s not that bad; we’re used to it.
I worry that people who are not ok with transience may turn themselves into monsters so they can still “win”—even though the meaning of “winning” is so changed it isn’t worth it any more.
I do think this comes back to the messages in On Green and also why the post went down like a cup of cold sick—rationality is about winning. Obviously nobody on LW wants to “win” in the sense you describe, but more winning over more harmony on the margin, I think.
The future will probably contain less of the way of life I value (or something entirely orthogonal), but then that’s the nature of things.
I have been having some similar thoughts on the main points here for a while and thanks for this.
I guess to me what needs attention is when people do things along the lines of “benefit themselves and harm other people”. That harm has a pretty strict definition, though I know we may always be able to give borderline examples. This definitely includes the abuse of power in our current society and culture, and any current risks, etc. (For example, constraining to just AI, with a content warning: https://www.iwf.org.uk/media/q4zll2ya/iwf-ai-csam-report_public-oct23v1.pdf. And this is very sad to see.) On the other hand, with regard to climate change (which can also be current) or AI risks, it probably should also be a concern when corporates or developers neglect known risks or pursue science/development irresponsibly. I think it is not wrong to work on these, but I just don’t believe in “do not solve the other current risks and only work on future risks.”
On some comments that were saying our society is “getting better”—sure, but the baseline is a very low bar (slavery for example). There are still many, many, many examples in different societies of how things are still very systematically messed up.
You seem to dislike reality. Could it not be that the worldview which clashes with reality is wrong (or rather, in the wrong), rather than reality being wrong/in the wrong? For instance that “nothing is forever” isn’t a design flaw, but one of the required properties that a universe must have in order to support life?
“fair-weather friends” who are only nice to you when it’s easy for them, are not true friends at all
if you don’t have the courage/determination to do the right thing when it’s difficult, you never cared about doing the right thing at all
if you sometimes engage in motivated cognition or are sometimes intellectually lazy/sloppy, then you don’t really care about truth at all
if you “mean well” but don’t put in the work to ensure that you’re actually making a positive difference, then your supposed “well-meaning” intentions were fake all along
I can see why people have these views.
if you actually need help when you’re in trouble, then “fair-weather friends” are no use to you
if you’re relying on someone to accomplish something, it’s not enough for them to “mean well”, they have to deliver effectively, and they have to do so consistently. otherwise you can’t count on them.
if you are in an environment where people constantly declare good intentions or “well-meaning” attitudes, but most of these people are not people you can count on, you will find yourself caring a lot about how to filter out the “posers” and “virtue signalers” and find out who’s true-blue, high-integrity, and reliable.
but I think it’s literally false and sometimes harmful to treat “weak”/unreliable good intentions as absolutely worthless.
not all failures are failures to care enough/try hard enough/be brave enough/etc.
sometimes people legitimately lack needed skills, knowledge, or resources!
“either I can count on you to successfully achieve the desired outcome, or you never really cared at all” is a long way from true.
even the more reasonable, “either you take what I consider to be due/appropriate measures to make sure you deliver, or you never really cared at all” isn’t always true either!
some people don’t know how to do what you consider to be due/appropriate measures
some people care some, but not enough to do everything you consider necessary
sometimes you have your own biases about what’s important: you really want to see people demonstrate a certain form of “showing they care”, and you’ll consider them negligent otherwise, but that’s not actually the most effective way to increase their success rate
almost everyone has a finite amount of effort they’re willing to put into things, and a finite amount of cost they’re willing to pay. that doesn’t mean you need to dismiss the help they are willing and able to provide.
as an extreme example, do you dismiss everybody as “insufficiently committed” if they’re not willing to die for the cause? or do you accept graciously if all they do is donate $50?
“they only help if it’s fun/trendy/easy/etc”—ok, that can be disappointing, but is it possible you should just make it fun/trendy/easy/etc? or just keep their name on file in case a situation ever comes up where it is fun/trendy/easy and they’ll be helpful then?
it’s harmful to apply this attitude to yourself, saying “oh I failed at this, or I didn’t put enough effort in to ensure a good outcome, so I must literally not care about ideals/ethics/truth/other people.”
like...you do care some amount. you did, in fact, mean well.
you may have lacked skill;
you may have not been putting in enough effort;
or maybe you care somewhat but not as much as you care about something else
but it’s probably not accurate or healthy to take a maximally-cynical view of yourself where you have no “noble” motives at all, just because you also have “ignoble” motives (like laziness, cowardice, vanity, hedonism, spite, etc).
if you have a flicker of a “good intention” to help people, make the world a better place, accomplish something cool, etc, you want to nurture it, not stomp it out as “probably fake”.
your “good intentions” are real and genuinely good, even if you haven’t always followed through on them, even if you haven’t always succeeded in pursuing them.
you don’t deserve “credit” for good intentions equal to the “credit” for actually doing a good thing, but you do deserve some credit.
basic behavioral “shaping”—to get from zero to a complex behavior, you have to reward very incremental simple steps in the right direction.
e.g. if you wish you were “nicer to people”, you may have to pat yourself on the back for doing any small acts of kindness, even really “easy” and “trivial” ones, and notice & make part of your self-concept any inclinations you have to be warm or helpful.
“I mean well and I’m trying” has to become a sentence you can say with a straight face. and your good intentions will outpace your skills so you have to give yourself some credit for them.
it may be net-harmful to create a social environment where people believe their “good intentions” will be met with intense suspicion.
it’s legitimately hard to prove that you have done a good thing, particularly if what you’re doing is ambitious and long-term.
if people have the experience of meaning well and trying to do good but constantly being suspected of insincerity (or nefarious motives), this can actually shift their self-concept from “would-be hero” to “self-identified villain”
which is bad, generally
at best, identifying as a villain doesn’t make you actually do anything unethical, but it makes you less effective, because you preemptively “brace” for hostility from others instead of confidently attracting allies
at worst, it makes you lean into legitimately villainous behavior
OTOH, skepticism is valuable, including skepticism of people’s motives.
but it can be undesirable when someone is placed in a “no-win situation”, where from their perspective “no matter what I do, nobody will believe that I mean well, or give me any credit for my good intentions.”
if you appreciate people for their good intentions, sometimes that can be a means to encourage them to do more. it’s not a guarantee, but it can be a starting point for building rapport and starting to persuade. people often want to live up to your good opinion of them.
it may be net-harmful to create a social environment where people believe their “good intentions” will be met with intense suspicion.
The picture I get of Chinese culture from their fiction makes me think China is kinda like this. A recurrent trope was “If you do some good deeds, like offering free medicine to the poor, and don’t do a perfect job, like treating everyone who says they can’t afford medicine, then everyone will castigate you for only wanting to seem good. So don’t do good.” Another recurrent trope was “it’s dumb, even wrong, to be a hero/you should be a villain.” (One annoying variant is “kindness to your enemies is cruelty to your allies”, which is used to justify pointless cruelty.) I always assumed this was a cultural antibody formed in response to communists doing terrible things in the name of the common good.
… this can actually shift their self-concept from “would-be hero” to “self-identified villain”
which is bad, generally
at best, identifying as a villain doesn’t make you actually do anything unethical, but it makes you less effective, because you preemptively “brace” for hostility from others instead of confidently attracting allies
at worst, it makes you lean into legitimately villainous behavior
Sounds like it’s time for a reboot of the ol’ “join the dark side” essay.
I want to register in advance, I have qualms I’d be interested in talking about. (I think they are at least one level more interesting than the obvious ones, and my relationship with them is probably at least one level more interesting than the obvious relational stance)
there’s a new field of “pan-cancer” where you make (mostly molbio) comparisons across cancers, including vulnerability screens where you use CRISPR or RNAi to knock down each gene and see which ones kill the cancer cells when absent.
https://link.springer.com/content/pdf/10.1186/s13059-023-03020-w.pdf CRISPR and RNAi both have their strengths and weaknesses but if you look at the overlap there are still a bunch of “pan-essential” genes that all cancers need to survive. (do healthy cells also need those, or are they good therapeutic targets? we just don’t know.)
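as a toy illustration of the overlap analysis (all gene names and hit lists below are made up, not from the paper), the set logic for finding “pan-essential” candidates supported by both technologies is just:

```python
# Toy sketch (hypothetical data): find candidate "pan-essential" genes
# that are hits in BOTH a CRISPR and an RNAi vulnerability screen,
# across every cancer line tested. Gene names are illustrative only.

crispr_hits = {
    "line_A": {"PLK1", "RPL3", "MYC", "EGFR"},
    "line_B": {"PLK1", "RPL3", "MYC"},
}
rnai_hits = {
    "line_A": {"PLK1", "RPL3", "KRAS"},
    "line_B": {"PLK1", "RPL3", "MYC", "KRAS"},
}

def pan_essential(screens):
    """Genes that are hits in every cell line of one screen."""
    sets = list(screens.values())
    common = set(sets[0])
    for s in sets[1:]:
        common &= s
    return common

# Intersecting the two technologies filters out each method's
# off-target artifacts, at the cost of some false negatives.
candidates = pan_essential(crispr_hits) & pan_essential(rnai_hits)
print(sorted(candidates))  # ['PLK1', 'RPL3']
```

the intersection trades sensitivity for specificity: a real pan-essential gene missed by either screen drops out, but what survives is much more trustworthy.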
a call to work harder dammit and treat it like a true war on cancer, not a sedate and bureaucratic academic field
a call for more pan-cancer RNAi vulnerability screens
a call to focus on transcription factors as targets, particularly things like Myc and BRD4 that are particularly involved in the transition to metastasis—we don’t yet have any good drug therapies that work well on metastatic cancers
transcription factors are obviously causally upstream of what makes cancer cancer—its invasiveness, its metastatic potential, its evasion of immune surveillance, etc
they are hard to drug though, because they’re in the nucleus, not on the cell surface. but we can start to do hard things now!
cell surface growth factors (think EGFR) are the easiest to target but the associated drugs have unimpressive clinical effects in most patients because targeting growth factors only slows growth, it doesn’t kill cancer cells. usually just slightly delays the inevitable.
a statement of his redox hobbyhorse—ROS is good, ROS is how the body fights cancer, etc.
not sure how to operationalize this as a strategy. it might, as it turns out, be redundant with immunotherapy.
a couple specific targets/mechanisms he thinks deserve more attention—apparently the circadian regulator PER2 is a tumor suppressor. i’m always down for more attention to circadian stuff.
the Halifax Project researched the hypothesis that low-dose combinations of environmental carcinogens might synergistically increase cancer risk:
https://en.wikipedia.org/wiki/CpG_oligodeoxynucleotide this is the inflammatory molecule on bacteria that’s the reason bacterial infections sometimes cause complete regressions of very difficult tumors (like sarcoma—we have no drugs for sarcoma! it’s either surgery or death!). fortunately the immuno-oncology people are On It and researching this as an immunostimulant.
if you’ve heard of “Coley Toxins”, they’re kind of an alt-med thing with a tantalizing grain of truth—but we don’t need to inject bacteria into tumors any more, we know how they work, we can replicate the effect with well-defined compounds now.
basically this is using the same principle as old-fashioned chemo—hit it in the DNA replication—but with a new target, and with modern structural-biology-based rational drug design to hit the cancer version of the target rather than the healthy-cell type.
would I have guessed there was room for optimism here a priori? no way.
but apparently we have not explored this space sufficiently. now try it with AlphaFold.
I haven’t yet seen many examples of “put the iCasp9 in the cell if-and-only-if the cell has some molecular marker” but that’s the obvious place to go.
you can kinda reduce the drug resistance thing by putting in a promoter to increase iCasp9 expression. buddy if this is where we are in 2022 i’m going to predict there is a LOT of potential value in continuing to work out the kinks in this system. get in on the ground floor!
https://www.sciencedirect.com/science/article/abs/pii/S0006291X0302504X if you actually measure what % of ATP comes from glycolysis, cancers cover a wide range, and the distribution overlaps substantially with the distribution of healthy cells. glycolysis dominance is not a distinguishing characteristic of all or even most cancers.
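a minimal sketch of that claim with made-up numbers (not the paper’s data): if you measure how much of the tumor distribution falls inside the healthy range, a large overlap means glycolysis fraction can’t work as a classifier.

```python
# Illustrative numbers only (not from the cited paper): fraction of ATP
# derived from glycolysis in some tumor and healthy cell lines. The
# point is that the two distributions overlap, so "mostly glycolytic"
# does not cleanly separate cancer from healthy tissue.
tumor_glycolysis   = [0.15, 0.30, 0.55, 0.64, 0.80]
healthy_glycolysis = [0.10, 0.20, 0.35, 0.50, 0.60]

lo, hi = min(healthy_glycolysis), max(healthy_glycolysis)
inside = [x for x in tumor_glycolysis if lo <= x <= hi]
overlap_frac = len(inside) / len(tumor_glycolysis)
print(f"{overlap_frac:.0%} of tumor values fall inside the healthy range")
```

with these toy numbers 3 of 5 tumor values land inside the healthy range; any threshold you pick misclassifies a lot of cells on one side or the other.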
https://pmc.ncbi.nlm.nih.gov/articles/PMC8274378/ mechanical stress is also a natural feature of cancer—tumors get more rigid and experience pressure. in fact this stress can be a trigger for increased proliferation or metastasis, so watch out!
are cancer cells selectively vulnerable to electrical stress? also kinda yeah
https://www.mdpi.com/2072-6694/13/9/2283 “tumor treating fields”, just an oscillating electric field, are actually an approved therapy in glioblastoma that extends life a few months. (not saying much though...glioblastoma is so deadly that it’s easy mode from an FDA standpoint)
i don’t even know man. somebody who knows physics explain this. little nanoelectrodes with some chemical functionalization kill cancer cells? “quantum biological tunneling?” https://www.nature.com/articles/s41565-023-01496-y
cancer cells have depolarized membranes—you can literally distinguish them from healthy cells by voltage alone.
this is a Michael Levin thing. https://pmc.ncbi.nlm.nih.gov/articles/PMC3528107/ you can give a frog a tumor—or make the tumor go away—through manipulating voltage alone! it does not matter what ion channel you use, it’s about the voltage.
something in (some of?) the neutrophils in (some) humans and a cancer-resistant strain of mice can kill cancer, including when transferred. a Zheng Cui research program.
my take is, he’s not an immunologist and modern methods could elucidate the specific clonal population a LOT better than this, but I like the thought process.
eg https://www.cell.com/cell-reports/pdfExtended/S2211-1247(22)00984-6 we can determine the “good guy” neutrophil subpopulation that infiltrates tumors and promotes an anti-tumor immune response: it’s HLA-DR+CD80+CD86+ICAM1+PD-L1-. in metastasis these guys become PD-L1+ and immunosuppressive.
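the gating rule is simple enough to sketch (the cells below are made up; the marker panel is my reading of the paper):

```python
# Sketch of the gating logic: classify a neutrophil as the anti-tumor
# ("good guy") phenotype iff it is HLA-DR+ CD80+ CD86+ ICAM1+ and
# PD-L1-. Example cells are invented for illustration.

ANTI_TUMOR_PROFILE = {
    "HLA-DR": True, "CD80": True, "CD86": True,
    "ICAM1": True, "PD-L1": False,
}

def is_anti_tumor(cell_markers: dict) -> bool:
    # every marker in the profile must match, including PD-L1 negativity
    return all(cell_markers.get(m) == v for m, v in ANTI_TUMOR_PROFILE.items())

cells = [
    {"HLA-DR": True, "CD80": True, "CD86": True, "ICAM1": True, "PD-L1": False},
    # same cell after the metastatic switch: PD-L1 flips on
    {"HLA-DR": True, "CD80": True, "CD86": True, "ICAM1": True, "PD-L1": True},
]
print([is_anti_tumor(c) for c in cells])  # [True, False]
```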
so like...the secret to replicating Zheng Cui’s miracle mice...might be nivolumab?? don’t get me wrong it’s a good drug but this is anticlimactic.
https://www.science.org/doi/abs/10.1126/scitranslmed.3007646 alkylphosphocholine is a type of lipid especially present in cancer cells, across cancer types, via lipid rafts. a synthetic analog has preferential uptake in basically all rodent & human tumors. usable for imaging and radiotherapy.
they found a fusion transcript and a corresponding fusion protein—the root cause
they did the reasonable thing: screen a compound library against tumor samples.
one hit is napabucasin, usually known as a STAT3 inhibitor (but that’s not the mechanism here) but somebody owns it
another was irinotecan. and navitoclax...but navitoclax has platelet toxicity
irinotecan + a Bcl-xL PROTAC is being investigated though
or you can just. shRNA the fusion transcript. that’s a thing you can do now.
apparently Elana wanted to do that in 2013 but her dad said “pshaw RNA breaks down in the body.” now Spinraza is a thing (antisense oligonucleotide.) not to mention the mRNA world. truly these are the days of miracle and wonder.
“neutrality is impossible” is sort-of-true, actually, but not a reason to give up.
even a “neutral” college class (let’s say a standard algorithms & data structures CS class) is non-neutral relative to certain beliefs
some people object to the structure of universities and their classes to begin with;
some people may object on philosophical grounds to concepts that are unquestionably “standard” within a field like computer science.
some people may think “apolitical” education is itself unacceptable.
to consider a certain set of topics “political” and not mention them in the classroom is, implicitly, to believe that it is not urgent to resolve or act on those issues (at least in a classroom context), and therefore it implies some degree of acceptance of the default state of those issues.
our “neutral” CS class is implicitly taking a stand on certain things and in conflict with certain conceivable views. but, there’s a wide range of views, including (I think) the vast majority of the actual views of relevant parties like students and faculty, that will find nothing to object to in the class.
we need to think about neutrality in more relative terms:
what rule are you using, and what things are you claiming it will be neutral between?
what is neutrality anyway and when/why do you want it?
neutrality is a type of tactic for establishing cooperation between different entities.
one way (not the only way) to get all parties to cooperate willingly is to promise they will be treated equally.
this is most important when there is actual uncertainty about the balance of power.
eg the Dutch Republic was the first European polity to establish laws of religious tolerance, because it happened to be roughly evenly divided between multiple religions and needed to unite to win its independence.
a system is neutral towards things when it treats them the same.
there are lots of ways to treat things the same:
“none of these things belong here”
eg no religion in “public” or “secular” spaces
is the “public secular space” the street? no-hijab rules?
or is it the government? no 10 Commandments in the courthouse?
“each of these things should get equal treatment”
eg Fairness Doctrine
“we will take no sides between these things; how they succeed or fail is up to you”
e.g. “marketplace of ideas”, “colorblindness”
one can always ask, about any attempt at procedural neutrality:
what things does it promise to be neutral between?
are those the right or relevant things to be neutral on?
to what degree, and with what certainty, does this procedure produce neutrality?
is it robust to being intentionally subverted?
here and now, what kind of neutrality do we want?
thanks to the Internet, we can read and see all sorts of opinions from all over the world. a wider array of worldviews are plausible/relevant/worth-considering than ever before. it’s harder to get “on the same page” with people because they may have come from very different informational backgrounds.
even tribes are fragmented. even people very similar to one another can struggle to synch up and collaborate, except in lowest-common-denominator ways that aren’t very productive.
narrowing things down to US politics, no political tribe or ideology is anywhere close to a secure monopoly. nor are “tribes” united internally.
we have relied, until now, on a deep reserve of “normality”—apolitical, even apathetic, Just The Way Things Are. In the US that means, people go to work at their jobs and get paid for it and have fun in their free time. 90′s sitcom style.
there’s still more “normality” out there than culture warriors tend to believe, but it’s fragile. As soon as somebody asks “why is this the way things are?” unexamined normality vanishes.
to the extent that the “normal” of the recent past was functional, this is a troubling development...but in general the operation of the mind is a good thing!
we just have more rapid and broader idea propagation now.
why did “open borders” and “abolish the police” and “UBI” take off recently? because these are simple ideas with intuitive appeal. some % of people will think “that makes sense, that sounds good” once they hear of them. and now, way more people are hearing those kinds of ideas.
when unexamined normality declines, conscious neutrality may become more important.
conscious neutrality for the present day needs to be aware of the wide range of what people actually believe today, and avoid the naive Panglossianism of early web 2.0.
many people believe things you think are “crazy”.
“democratization” may lead to the most popular ideas being hateful, trashy, or utterly bonkers.
on the other hand, depending on what you’re trying to get done, you may very well need to collaborate with allies, or serve populations, whose views are well outside your comfort zone.
neutrality has things to offer:
a way to build trust with people very different from yourself, without compromising your own convictions;
“I don’t agree with you on A, but you and I both value B, so I promise to do my best at B and we’ll leave A out of it altogether”
a way to reconstruct some of the best things about our “unexamined normality” and place them on a firmer foundation so they won’t disappear as soon as someone asks “why?”
a “system of the world” is the framework of your neutrality: aka it’s what you’re not neutral about.
eg:
“melting pot” multiculturalism is neutral between cultures, but does believe that they should mostly be cosmetic forms of diversity (national costumes and ethnic foods) while more important things are “universal” and shared.
democratic norms are neutral about who will win, but not that majority vote should determine the winner.
scientific norms are neutral about which disputed claims will turn out to be true, but not on what sorts of processes and properties make claims credible, and not about certain well-established beliefs
right now our system-of-the-world is weak.
a lot of it is literally decided by software affordances. what the app lets you do is what there is.
there’s a lot that’s healthy and praiseworthy about software companies and their culture, especially 10-20 years ago. but they were never prepared for that responsibility!
a stronger system-of-the-world isn’t dogmatism or naivety.
were intellectuals of the 20th, the 19th, or the 18th centuries childish because they had more explicit shared assumptions than we do? I don’t think so.
we may no longer consider some of their frameworks to be true
but having a substantive framework at all clearly isn’t incompatible with thinking independently, recognizing that people are flawed, or being open to changing your mind.
“hedgehogs” or “eternalists” are just people who consider some things definitely true.
it doesn’t mean they came to those beliefs through “blind faith” or have never questioned them.
it also doesn’t mean they can’t recognize uncertainty about things that aren’t foundational beliefs.
operating within a strongly-held, assumed-shared worldview can be functional for making collaborative progress, at least when that worldview isn’t too incompatible with reality.
mathematics was “non-rigorous”, by modern standards, until the early 20th century; and much of today’s mathematics will be considered “non-rigorous” if machine-verified proofs ever become the norm. but people were still able to do mathematics in centuries past, most of which we still consider true.
the fact that you can generate a more general framework, within which the old framework was a special case; or in which the old framework was an unprincipled assumption of the world being “nicely behaved” in some sense; does not mean that the old framework was not fruitful for learning true things.
sometimes, taking for granted an assumption that’s not literally always true (but is true mostly, more-or-less, or in the practically relevant cases) can even be more fruitful than a more radically skeptical and general view.
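for a concrete taste of what “machine-verified” means here: in a proof assistant, every inference step is checked by a small trusted kernel, nothing is taken on faith. a tiny Lean 4 example (proving a standard fact from scratch rather than citing the library lemma):

```lean
-- A machine-checked proof in Lean 4: commutativity of addition on Nat,
-- by induction on b. The kernel verifies every rewrite step.
theorem my_add_comm (a b : Nat) : a + b = b + a := by
  induction b with
  | zero => simp
  | succ n ih => rw [Nat.add_succ, ih, Nat.succ_add]
```

mathematicians proved and used `a + b = b + a` for millennia before anything like this existed; the old, less rigorous framework was still fruitful.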
an *intellectual* system-of-the-world is the framework we want to use for the “republic of letters”, the sub-community of people who communicate with each other in a single conversational web and value learning and truth.
that community expanded with the printing press and again with the internet.
it is radically diverse in opinion.
it is not literally universal. not everybody likes to read and write; not everybody is curious or creative. a lot of the “most interesting people in the world” influence each other.
everybody in the old “blogosphere” was, fundamentally, the same sort of person, despite our constant arguments with each other; and not a common sort of person in the broader population; and we have turned out to be more influential than we have ever been willing to admit.
but I do think of it as a pretty big and growing tent, not confined to 300 geniuses or anything like that.
“The” conversation—the world’s symbolic information and its technological infrastructure—is something anybody can contribute to, but of course some contribute more than others.
I think the right boundary to draw is around “power users”—people who participate in that network heavily rather than occasionally.
e.g. not all academics are great innovators, but pretty much all of them are “power users” and “active contributors” to the world’s informational web.
I’m definitely a power user; I expect a lot of my readers are as well.
what do we need to not be neutral about in this context? what belongs in an intellectual system-of-the-world?
another way of asking this question: about what premises are you willing to say, not just for yourself but for the whole world and for your children’s children, “if you don’t accept this premise then I don’t care to speak to you or hear from you, forever?”
clearly that’s a high standard!
I have many values differences with, say, the author of the Epic of Gilgamesh, but I still want to read it. And I want lots of other people to be able to read it! I do not want the mind that created it to be blotted out of memory.
that’s the level of minimal shared values we’re talking about here. What do we have in common with everyone who has an interest in maintaining and extending humanity’s collective record of thought?
lack of barriers to entry is not enough.
the old Web 2.0 idea was “allow everyone to communicate with everyone else, with equal affordances.” This is a kind of “neutrality”—every user account starts out exactly the same, and anybody can make an account.
I think that’s still an underrated principle. “literally anybody can speak to anybody else who wants to listen” was an invention that created a lot of valuable affordances. we forget how painfully scarce information was when that wasn’t true!
the problem is that an information system only works when a user can find the information they seek. And in many cases, what the user is seeking is true information.
mechanisms intended to make high quality information (reliable, accurate, credible, complete, etc) preferentially discoverable, are also necessary
but they shouldn’t just recapitulate potentially-biased gatekeeping.
we want evaluative systems that, at least a priori, an ancient Sumerian could look at and say “yep, sounds fair”, even if the Sumerian wouldn’t like the “truths” that come out on top in those systems.
we really can’t be parochial here. social media companies “patched” the problem of misinformation with opaque, partisan side-taking, and they suffered for it.
how “meta” do we have to get about determining what counts as reliable or valid? well, more meta than just picking a winning side in an ongoing political dispute, that’s for sure.
probably also more “meta” than handpicking certain sources as trustworthy, the way Wikipedia does.
if we want to preserve and extend knowledge, the “republic of letters” needs intentional stewardship of the world’s information, including serious attempts at neutrality.
perceived bias, of course, turns people away from information sources.
nostalgia for unexamined normality—“just be neutral, y’know, like we were when I was young”—is not a credible offer to people who have already found your nostalgic “normal” wanting.
rigorous neutrality tactics—“we have structured this system so that it is impossible for anyone to tamper with it in a biased fashion”—are better.
this points towards protocols.
h/t Venkatesh Rao
think: zero-knowledge proofs, formal verification, prediction markets, mechanism design, crypto-flavored governance schemes, LLM-enabled argument mapping, AI mechanistic-interpretability and “showing its work”, etc
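to make one of these concrete: a minimal sketch of Hanson’s logarithmic market scoring rule (LMSR), the standard automated market maker behind many prediction markets. the liquidity parameter and trades below are made up; the point is that the pricing rule is a fixed, public formula, so no operator discretion enters into what the market “says”:

```python
import math

# LMSR market maker: cost function C(q) = b * log(sum_i exp(q_i / b)),
# where q_i is the number of shares sold for outcome i and b is a
# public liquidity parameter (b=100 here is arbitrary).

def lmsr_cost(quantities, b=100.0):
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b=100.0):
    """Instantaneous prices: softmax of q/b; they always sum to 1."""
    exps = [math.exp(q / b) for q in quantities]
    total = sum(exps)
    return [e / total for e in exps]

# Two-outcome market before any trading: both prices are 0.5.
q = [0.0, 0.0]
print(lmsr_prices(q))

# A trader buys 50 shares of outcome 0. They pay C(after) - C(before),
# and the quoted price of outcome 0 rises mechanically.
paid = lmsr_cost([50.0, 0.0]) - lmsr_cost(q)
print(round(paid, 2), [round(p, 3) for p in lmsr_prices([50.0, 0.0])])
```

anyone can recompute every price from the public trade history, which is exactly the “impossible to tamper with in a biased fashion” property the previous point asks for.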
getting fancy with the technology here often seems premature when the “public” doesn’t even want neutrality; but I don’t think it actually is.
people don’t know they want the things that don’t yet exist.
the people interested in developing “provably”, “rigorously”, “demonstrably” impartial systems are exactly the people you want to attract first, because they care the most.
getting it right matters.
a poorly executed attempt either fizzles instantly, or catches on but its underlying flaws start to make it actively harmful once it’s widely culturally influential.
OTOH, premature disputes on technology and methods are undesirable.
remember there aren’t very many of you/us. that is:
pretty much everybody who wants to build rigorous neutrality, no matter why they want it or how they want to implement it, is a potential ally here.
the simple fact of wanting to build a “better” world that doesn’t yet exist is a commonality, not to be taken for granted. most people don’t do this at all.
the “softer” side, mutual support and collegiality, are especially important to people whose dreams are very far from fruition. people in this situation are unusually prone to both burnout and schism. be warm and encouraging; it helps keep dreams alive.
also, the whole “neutrality” thing is a sham if we can’t even engage with collaborators with different views and cultural styles.
also, “there aren’t very many of us” in the sense that none of these envisioned new products/tools/institutions are really off the ground yet, and the default outcome is that none of them get there.
you are playing in a sandbox. the goal is to eventually get out of the sandbox.
you will need to accumulate talent, ideas, resources, and vibe-momentum. right now these are scarce, or scattered; they need to be assembled.
be realistic about influence.
count how many people are at the conference or whatever. how many readers. how many users. how many dollars. in absolute terms it probably isn’t much. don’t get pretentious about a “movement”, “community”, or “industry” before it’s shown appreciable results.
the “adjacent possible” people to get involved aren’t the general public, they’re the closest people in your social/communication graph who aren’t yet participating. why aren’t they part of the thing? (or why don’t you feel comfortable going to them?) what would you need to change to satisfy the people you actually know?
this is a better framing than speculating about mass appeal.
even a “neutral” college class (let’s say a standard algorithms & data structures CS class) is non-neutral relative to certain beliefs
Things that many people consider controversial: evolution, sex education, history. But even for mathematical lessons, you will often find a crackpot who considers a given topic controversial. (-1)×(-1) = 1? 0.999… = 1?
some people object to the structure of universities and their classes to begin with
In general, unschooling.
In my opinion, the important functionality of schools is: (1) separating reliable sources of knowledge from bullshit, (2) designing a learning path from “I know nothing” to “I am an expert” where each step only requires the knowledge of previous steps, (3) classmates and teachers to discuss the topic with.
Without these things, learning is difficult. If an autodidact stumbles on some pseudoscience in the library, even if they later figure out that it was bullshit, it is a huge waste of time. Picking up random books on a topic and finding out that I don’t understand the things they expect me to already know is disappointing. Finding people interested in the same topic can be difficult.
But everything else about education is incidental. No need to walk into the same building. No need to only have classmates of exactly the same age. The learning path doesn’t have to be linear; it could be a directed acyclic graph. Generally, no need to learn a specific topic at a specific age, although it makes sense to learn the topics that are prerequisites to a lot of knowledge as soon as possible. Grading is incidental; you need some feedback, but IMHO it would be better to split the knowledge into many small pieces, and grade each piece as “you get it” or “you don’t”.
...and the conclusion of my thesis is that a good educational system would focus on the essentials, and be liberal about everything else. However, there are people who object to the very things I consider essential. The educational system that would seem incredibly free to me would still seem oppressive to them.
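As a sketch of the “learning path as a graph” idea (topic names invented): map each topic to its prerequisites, and any topological order of that graph is a valid path from “I know nothing” toward the target.

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# Hypothetical curriculum: each topic maps to its prerequisites.
prereqs = {
    "counting": set(),
    "arithmetic": {"counting"},
    "algebra": {"arithmetic"},
    "geometry": {"arithmetic"},
    "calculus": {"algebra", "geometry"},
}

order = list(TopologicalSorter(prereqs).static_order())
print(order)  # one valid learning path; many others exist

# Sanity check: every topic appears after all of its prerequisites.
pos = {topic: i for i, topic in enumerate(order)}
assert all(pos[p] < pos[t] for t, ps in prereqs.items() for p in ps)
```

Nothing forces a single linear sequence: after "arithmetic", a student can do "algebra" and "geometry" in either order, or in parallel.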
neutrality is a type of tactic for establishing cooperation between different entities.
That means you can have a system neutral towards selected entities (the ones you want in the coalition), but not others. For example, you can have religious tolerance towards an explicit list of churches.
This can lead to a meta-game where some members of the coalition try to kick out someone because they are no longer necessary. And some members strategically keep someone in, not necessarily because they love them, but because “if they are kicked out today, tomorrow it could be me; better to avoid this slippery slope”.
Examples: Various cults in the USA that are obviously destructive but enjoy a lot of legal protection. Leftists establishing an exception for “Nazis”, and then expanding the definition to make it apply to anyone they don’t like. Similarly, the right calling everything they don’t like “communism”. And everyone on the internet calling everything “religion”.
“we will take no sides between these things; how they succeed or fail is up to you”
Or the opposite of that: “the world is biased against X, therefore we move towards true neutrality by supporting X”.
is it robust to being intentionally subverted?
So, situations like: the organization is nominally politically neutral, but the human at an important position has political preferences... so far this is normal and maybe unavoidable, but what if there are multiple humans like that, all with the same political preference? If they start acting in a biased way, is it possible for other members to point it out... without getting accused in turn of “bringing politics” into the organization?
As soon as somebody asks “why is this the way things are?” unexamined normality vanishes.
They can easily create a subreddit r/anti-some-specific-way-things-are and now the opposition to the idea is forever a thing.
a way to reconstruct some of the best things about our “unexamined normality” and place them on a firmer foundation so they won’t disappear as soon as someone asks “why?”
Basically, we need a “FAQ for normality”. The old situation was that people who were interested in a topic knew why things are a certain way, and others didn’t care. If you joined the group of people who are interested, sooner or later someone explained it to you in person.
But today, someone can make a popular YouTube video containing some false explanation, and overnight you have tons of people who are suddenly interested in the topic and believe a falsehood… and the people who know how things are just don’t have the capacity to explain that to someone who lacks the fundamentals, believes a lot of nonsense, has strong opinions, and is typically very hostile to someone trying to correct them. So they just give up. But now we have the falsehood established as an “alternative truth”, and the old process of teaching the newcomers no longer works.
The solution for “I don’t have the capacity to communicate to so many ignorant and often hostile people” is to make an article or a YouTube video with the explanation, and just keep posting the link. Some people will pay attention, some won’t, but it no longer takes a lot of your time, and it protects you from the emotional impact.
There are things for which we don’t have a good article to link, or the article is not known to many. We could fix that. In theory, school was supposed to be this kind of FAQ, but that doesn’t work in a dynamic society where new things happen after you are out of school.
a lot of it is literally decided by software affordances. what the app lets you do is what there is.
Yeah, I often feel that having some kind of functionality would improve things, but the functionality is simply not there.
To some degree this is caused by companies having a monopoly on the ecosystem they create. For example, if I need some functionality for e-mail, I can make an open-source e-mail client that has it. (I think historically spam filters started like this.) If I need some functionality for Facebook… there is nothing I can do about it, other than leave Facebook, and coordinating that with others is a problem.
Sometimes this is on purpose. Facebook doesn’t want me to be able to block the ads and spam, because they profit from it.
but having a substantive framework at all clearly isn’t incompatible with thinking independently, recognizing that people are flawed, or being open to changing your mind.
Yeah, if we share a platform, we may start examining some of its assumptions, and maybe at some moment we will collectively update. But if everyone assumes something else, it’s the Eternal September of civilization.
If we can’t agree on what is addition, we can never proceed to discuss multiplication. And we will never build math.
I think the right boundary to draw is around “power users”—people who participate in that network heavily rather than occasionally.
Sometimes this is reflected by the medium. For example, many people post comments on blogs, but only a small fraction of them write blogs. By writing a blog you join the “power users”, and the beauty of it is that it is free for everyone, and yet most people keep themselves out voluntarily.
(A problem coming soon: many fake “power users” powered by LLMs.)
I have many values differences with, say, the author of the Epic of Gilgamesh, but I still want to read it.
There is a difference between reading for curiosity and reading to get reliable information. I may be curious about e.g. Aristotle’s opinion on atoms, but I am not going to use it to study chemistry.
In some way, I treat some people’s opinions as information about the world, and other people’s opinions as information about them. Both are interesting, but in a different way. It is interesting to know my neighbor’s opinion on astrology, but I am not using this information to update on astrology; I only use it to update on my neighbor.
So I guess I have two different lines: whether I care about someone as a person, and whether I trust someone as a source of knowledge. I listen to both, but I process the information differently.
this points towards protocols.
Thinking about the user experience, I think it would be best if the protocol already came with three default implementations: as a website, as a desktop application, and as a smartphone app.
A website doesn’t require me to install anything; I just create an account and start using it. The downside is that the website has an owner, who can kick me out of the website. Also, I cannot verify the code; a malicious owner could probably take my password (unless we figure out some way to avoid this that won’t be too inconvenient). Ideally there would be multiple websites talking to each other, in a way that is as transparent for the user as possible.
A smartphone app, because that’s what most people use most of the day, especially when they are outside.
A desktop app, because that provides the most options for the (technical) power user. For example, it would be nice to keep an offline archive of everything I want, delete anything I no longer want, and export and import data.
https://darioamodei.com/machines-of-loving-grace [[AI]] [[biotech]] [[Dario Amodei]] spends about half of this document talking about AI for bio, and I think it’s the most credible “bull case” yet written for AI being radically transformative in the biomedical sphere.
one caveat is that I think if we’re imagining a future with brain mapping, regeneration of macroscopic brain tissue loss, and understanding what brains are doing well enough to know why neurological abnormalities at the cell level produce the psychiatric or cognitive symptoms they do...then we probably can do brain uploading! it’s really weird to single out this one piece as pie-in-the-sky science fiction when you’re already imagining a lot of similarly ambitious things as achievable.
https://venture.angellist.com/eli-dourado/syndicate [[tech industry]] when [[Eli Dourado]] picks startups, they’re at least not boring! i haven’t vetted the technical viability of any of these, but he claims to do a lot of that sort of numbers-in-spreadsheets work.
https://forum.effectivealtruism.org/topics/shapley-values [[EA]] [[economics]] how do you assign credit (in a principled fashion) to an outcome that multiple people contributed to? Shapley values! It seems extremely hard to calculate in practice, and subject to contentious judgment calls about the assumptions you make, but maybe it’s an improvement over raw handwaving.
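For a toy illustration, Shapley values can be computed exactly by averaging each player’s marginal contribution over every order in which the players could have joined. The player names and payoff function below are invented for the example; this brute-force approach is also why the calculation is “extremely hard in practice”: the number of orders grows factorially.

```python
import math
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal
    contribution to value() over every joining order."""
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = set()
        for p in order:
            before = value(coalition)
            coalition = coalition | {p}
            totals[p] += value(coalition) - before
    n_orders = math.factorial(len(players))
    return {p: t / n_orders for p, t in totals.items()}

# Invented example: the project pays 100 only if both A and B join;
# C contributes 20 regardless of who else is present.
def v(coalition):
    payoff = 100 if {"A", "B"} <= coalition else 0
    if "C" in coalition:
        payoff += 20
    return payoff
```

Here A and B each get 50 and C gets 20; the “contentious judgment calls” are all hidden in how you define the value function in the first place.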
https://gwern.net/maze [[Gwern Branwen]] digs up the “Mr. Young” studying maze-running techniques in [[Richard Feynman]]’s “Cargo Cult Science” speech. His name wasn’t Young but Quin Fischer Curtis, and he was part of a psychology research program at UMich that published little and had little influence on the outside world, and so was “rebooted” and forgotten. Impressive detective work, though not a story with a very satisfying “moral”.
She’s doing an interesting thing here that I haven’t wrapped my head around. She’s not making the positive case “students today are NOT oversensitive or illiberal” or “trigger warnings are beneficial,” even though she seems to believe both those things. she’s more calling into question “why has this complaint become a common talking point? what unstated assumptions does it perpetuate?” I am not sure whether this is a valid approach that’s alternate to the forms of argument I’m more used to, or a sign of weakness (a thing she’s doing only because she cannot make the positive case for the opposite of what her opponents claim.)
NSAIDS and omega-3 fatty acids prevent 95% of tumors in a tumor-prone mouse strain?!
also we’re targeting [[STAT3]] now?! that’s a thing we’re doing.
([[STAT3]] is a major oncogene, but it’s a transcription factor: it lives in the cytoplasm and the nucleus, so it is not as easy to target with small molecules as a cell surface protein.)
https://en.m.wikipedia.org/wiki/CLARITY [[biotech]] make a tissue sample transparent so you can do 3D microscopic imaging, with contrast from immunostaining or DNA/RNA labels
tl;dr: Hamas consistently wants to destroy Israel and commit violence against Israelis, they say so repeatedly, and there was never going to be a long-term possibility of living peacefully side-by-side with them; Netanyahu is a tough talker but kind of a procrastinator who’s kicked the can down the road on national security issues for his entire career; catering to settlers is not in the best interests of Israel as a whole (they provoke violence) but they are an unduly powerful voting bloc; Palestinian misery is real but has been institutionalized by the structure of the Gazan state and the UN which prevents any investment into a real local economy; the “peace process” is doomed because Israel keeps offering peace and the Palestinians say no to any peace that isn’t the abolition of the State of Israel.
it’s pretty common for reasonable casual observers (eg in America) to see Israel/Palestine as a tragic conflict in which probably both parties are somewhat in the wrong, because that’s a reasonable prior on all conflicts. The more you dig into the details, though, the more you realize that “let’s live together in peace and make concessions to Palestinians as necessary” has been the mainstream Israeli position since before 1948. It’s not a symmetric situation.
concentrated in the [[anterior cingulate cortex]] and [[insular cortex]] which are closely related to the “sense of self” (i.e. interoception, emotional salience, and the perception that your e.g. hand is “yours” and it was “you” who moved it)
she’s more calling into question “why has this complaint become a common talking point? what unstated assumptions does it perpetuate?” I am not sure whether this is a valid approach that’s alternate to the forms of argument I’m more used to, or a sign of weakness
It is good to have one more perspective, and perhaps also good to develop a habit to go meta. So that when someone tells you “X”, in addition to asking yourself “is X actually true?” you also consider questions like “why is this person telling me X?”, “what could they gain in this situation by making me think more about X?”, “are they perhaps trying to distract me from some other Y?”.
Because there are such things as filtered evidence, availability bias, limited cognition; and they all can be weaponized. While you are trying really hard to solve the puzzle the person gave you, they may be using your inattention to pick your pockets.
In extreme cases, it can even be a good thing to dismiss the original question entirely. Like, if you are trying to leave an abusive religious cult, and the leader gives you a list of “ten thousand extremely serious theological questions you need to think about deeply before you make the potentially horrible mistake of damning your soul by leaving this holy group”, you should not actually waste your time thinking about them, but keep planning your escape.
Now the opposite problem is that some people get so addicted to the meta that they are no longer considering the object level. “You say I’m wrong about something? Well, that’s exactly what the privileged X people love to do, don’t they?” (Yeah, they probably do. But there is still a chance that you are actually wrong about something.)
tl;dr—mentioning the meta, great; but completely avoiding the object level, weakness
So, how much meta is the right amount of meta? Dunno, that’s a meta-meta question. At some point you need to follow your intuition and hope that your priors aren’t horribly wrong.
The more you dig into the details, though, the more you realize that “let’s live together in peace and make concessions to Palestinians as necessary” has been the mainstream Israeli position since before 1948. It’s not a symmetric situation.
The situation is not symmetric, I agree. But also, it is too easy to underestimate the impact of the settlers. I mean, if you include them in the picture, then the overall Israeli position becomes more like: “Let’s live together in peace, and please ignore these few guys who sometimes come to shoot your family and take your homes. They are an extremist minority that we don’t approve of, but for complicated political reasons we can’t do anything about them. Oh, and if you try to defend yourself against them, chances are our army might come to defend them. And that’s also something we deeply regret.”
It is much better than the other side, but in my opinion still fundamentally incompatible with peace.
kinda meta, but I find myself wondering if we should handle Roam [[ tag ]] syntax in some nicer way. Probably not but it seems nice if it managed to have no downsides.
It wouldn’t collide with normal Markdown syntax use. (I can’t think of any natural examples, aside from bracket use inside links, like [[editorial comment]](URL), which could be special-cased by looking for the parentheses required for the URL part of a Markdown link.) But it would be ambiguous where the wiki links point to (Sarah’s Roam wiki? English Wikipedia?), and if it pointed to somewhere other than LW2 wiki entries, then it would also be ambiguous with that too (because the syntax is copied from Mediawiki and so the same as the old LW wiki’s links).
And it seems like an overloading special case you would regret in the long run, compared to something which rewrote them into regular links. Adds in a lot of complexity for a handful of uses.
I don’t know how familiar you are with regular expressions, but you could do this with a two-pass regular expression search-and-replace. (I used Emacs regex format; your preferred editor might use a different format. Notably, in Emacs [ is a literal bracket but ( is a literal parenthesis, for some reason.)
replace “^(https://.*? )([[.*?]] )*” with “\1”
replace “[[(.*?)]]” with “\1”
This first deletes any tags that occur right after a hyperlink at the beginning of a line, then removes the brackets from any remaining tags.
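The same two passes can be sketched in Python (note that Python’s regex syntax, unlike the Emacs form above, requires the brackets to be escaped):

```python
import re

def strip_roam_tags(text):
    # Pass 1: delete [[tag]] runs that sit right after a URL at line start.
    text = re.sub(r"^(https://\S+ )(\[\[.*?\]\] )*", r"\1", text, flags=re.M)
    # Pass 2: unwrap any remaining [[tag]] to bare text.
    return re.sub(r"\[\[(.*?)\]\]", r"\1", text)
```

So `https://example.com [[AI]] [[biotech]] notes` becomes `https://example.com notes`, and an inline `[[Eli Dourado]]` becomes plain `Eli Dourado`.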
the man has good taste. like, it’s not blindingly original to appreciate Retro, but it is eminently reasonable.
there’s a lot of moderate-Democrat post-election resignation to the effect of “this is what the country wanted; the median voter is in fact pretty OK with Trump” and “the progressive apparatus was more interested in staying in its comfort zone than winning elections”
I’m also seeing a fair number of women going “ok, sure, there are things to criticize about feminist dogma, but actually I have experienced traditionalist religious mores and they were Not Good”, which I think is a needed corrective these days
https://bibliome.ai/ is a resource for looking up specific genome variants and their references in the literature and open-access databases.
when i click through to references they’re often inaccurate (they are claimed to reference a variant that they do not, in fact, contain) but tbh this is also true of Google Search and Google Scholar when it comes to rare variants.
we have not found a physiological difference between the brains of addicts and non-addicts
people are more likely to get addicted to drugs when their lives are terrible; only focusing on biomedical angles on tackling drug addiction means that it’s not considered “real” drug-addiction work to try to improve underlying social problems like poverty or injustice
in particular drug-war policies are often part of the problem, and biomedical addiction research can’t critique laws
https://www.science.org/doi/abs/10.1126/science.abb5920 this one didn’t make the cutoff for my success-story post (only 1/10 patients had a CR) but it’s astonishing that it does anything at all; a fecal matter transplant resulted in a complete response (and two partial responses) upon reintroduction of PD1 immunotherapy, in metastatic melanoma patients who had failed it before.
i am so disillusioned with FMTs that i might still chalk this up to a fluke, but who knows
really high complete response rates in metastatic cancers almost only occur when you have a topical/intratumoral/etc treatment physically localized to the tumor, frequently using an innate-immune mechanism.
that’s also the literal majority of all historical cases of spontaneous tumor regressions—they tend to happen when there’s an infection at the tumor site, causing a powerful (innate! fever, inflammation, sepsis!) immune reaction.
the innate immune system is potent, and it is nasty, which is why you want to confine it.
immune checkpoint inhibitors are real good for metastatic cancer:
https://link.springer.com/article/10.1245/s10434-018-07143-4 isolated limb perfusion for melanoma: get higher doses of chemo into the tumor than the patient could survive otherwise, by cutting off circulation to the limb. when this sort of thing is possible, it really, really works.
https://link.springer.com/article/10.1007/s10549-022-06678-1 I hate on growth factor-targeted therapies a lot, but there are exceptions. Herceptin is a real drug. Look at this. 69 HER2+ patients presenting with metastatic breast cancer and treated with trastuzumab as part of their initial treatment, 54% get a complete response. 41% survived 5+ years after diagnosis. This is really, really solid.
electrochemotherapy is injecting tumors with cytotoxic drugs and electroporating the tumor so the drugs get in better.
It’s only possible when you can physically access the tumor, i.e. when it’s on the skin or when you’re operating anyway (but can’t surgically remove the tumor, because if you could, you would just do that).
if you can prove your computer program does what it’s supposed to—for almost any reasonable interpretation of “what it’s supposed to”—you will, as a side effect, also prove it doesn’t have common security flaws like buffer overflows.
people I looked up while reading Neal Stephenson’s Baroque Cycle:
I wouldn’t say “not a Bayesian” because there’s nothing wrong with Bayes’ Rule and I don’t like the tribal connotations, but lbr, we don’t literally use Bayes’ rule very often and when we do it often reveals just how much our conclusions depend on problem framing and prior assumptions. A lot of complexity/ambiguity necessarily “lives” in the part of the problem that Bayes’ rule doesn’t touch. To be fair, I think “just turn the crank on Bayes’ rule and it’ll solve all problems” is a bit of a strawman—nobody literally believes that, do they? -- but yeah, sure, happy to admit that most of the “hard part” of figuring things out is not the part where you can mechanically apply probability.
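A minimal numeric illustration of that prior-sensitivity point (all numbers invented): with the exact same evidence and likelihoods, the mechanical Bayes step gives wildly different conclusions depending on the prior you walked in with, so the “hard part” is everything that determined the prior.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) from Bayes' rule, for a single binary hypothesis H."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Same evidence each time (roughly a 20:1 likelihood ratio),
# three different priors: the posterior ranges from ~2% to ~69%.
results = {prior: posterior(prior, 0.99, 0.05) for prior in (0.001, 0.01, 0.1)}
```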
https://www.lesswrong.com/posts/YZvyQn2dAw4tL2xQY/rationalists-are-missing-a-core-piece-for-agent-like [[tailcalled]] this one is actually interesting and novel; i’m not sure what to make of it. maybe literal physics, with like “forces”, matters and needs to be treated differently than just a particular pattern of information that you could rederive statistically from sensory data? I kind of hate it but unlike tailcalled I don’t know much about physics-based computational models...[[philosophy]]
https://alignbio.org/ [[biology]] [[automation]] datasets generated by the Emerald Cloud Lab! [[Erika DeBenedectis]] project. Seems cool!
when a mouse trapped in water stops struggling, that is not “despair” or “learned helplessness.” these are anthropomorphisms. the mouse is in fact helpless, by design; struggling cannot save it; immobility is adaptive.
in fact, mice become immobile faster when they have more experience with the test. they learn that struggling is not useful and they retain that knowledge.
also, a mouse in an acute stress situation is not at all like a human’s clinical depression, which develops gradually and persists chronically.
https://en.wikipedia.org/wiki/Copy_Exactly! [[semiconductors]] the Wiki doesn’t mention that Copy Exactly was famously a failure. even when you try to document procedures perfectly and replicate them on the other side of the world, at unprecedented precision, it is really really hard to get the same results.
https://neuroscience.stanford.edu/research/funded-research/optimization-african-killifish-platform-rapid-drug-screening-aggregate [[biology]] you know what’s cool? building experimentation platforms for novel model organisms. Killifish are the shortest-lived vertebrate—which is great if you want to study aging. they live in weird oxygen-poor freshwater zones that are hard to replicate in the lab. figuring out how to raise them in captivity and standardize experiments on them is the kind of unsung, underfunded accomplishment we need to celebrate and expand WAY more.
https://www.nature.com/articles/513481a [[biology]] [[drug discovery]] ever heard of curcumin doing something for your health? resveratrol? EGCG? those are all natural compounds that light up a drug screen like a Christmas tree because they react with EVERYTHING. they are not going to work on your disease in real life.
they’re called PAINs, pan-assay interference compounds, and if you’re not a chemist (or don’t consult one) your drug screen is probably full of ’em. false positives on academic drug screens (Big Pharma usually knows better) are a scourge. https://en.wikipedia.org/wiki/Pan-assay_interference_compounds
https://substack.com/home/post/p-149791027 [[archaeology]] it was once thought that Gobekli Tepe was a “festival city” or religious sanctuary, where people visited but didn’t live, because there wasn’t a water source. Now, they’ve found something that looks like water cisterns, and they suspect people did live there.
I don’t like the framing of “hunter-gatherer” = “nomadic” in this post.
We keep pushing the date of agriculture farther back in time. We keep discovering that “hunter-gatherers” picking plants in “wild” forests are actually doing some degree of forest management, planting seeds, or pulling undesirable weeds. Arguably there isn’t a hard-and-fast distinction between “gathering” and “gardening”. (Grain agriculture where you use a plow and completely clear a field for planting your crop is qualitatively different from the kind of kitchen-garden-like horticulture that can be done with hand tools and without clearing forests. My bet is that all so-called hunter-gatherers did some degree of horticulture until proven otherwise, excepting eg arctic environments)
what the water actually suggests is that people lived at Gobekli Tepe for at least part of the year. it doesn’t say what they were eating.
everybody want to test rats in mazes, ain’t nobody want to test this janky-ass maze!
One of the interesting things I found when I finally tracked down the source is that one of the improved mazes before that was a 3D maze where mice had to choose vertically, keeping them in the same position horizontally, because otherwise they apparently were hearing some sort of subtle sound whose volume/direction let them gauge their position and memorize the choice. So Hunter created a stack of T-junctions, so each time they were another foot upwards/downwards, but at the same point in the room and so the same distance away from the sound source.
he thinks Kamala Harris was an “empty shell” and unlikable and he felt the campaign was manipulative and deceptive.
he didn’t like that she seemed to be a “DEI hire”, but doesn’t have a problem with black or female candidates generally, it’s just that he resents cynical demographic box-checking.
this is a coherent POV—he did vote for Obama, after all. and plenty of people are like “I want the best person regardless of demographics, not a person chosen for their demographics.”
hm. why doesn’t it seem natural to portray Obama as a “DEI hire”? his campaign made a bigger deal about race than Harris’s, and he was criticized a lot for inexperience.
One guess: it’s laughable to think Obama was chosen by anyone besides himself. He was not the Democratic Party’s anointed—that was Hillary. He’s clearly an ambitious guy who wanted to be president on his own initiative and beat the odds to get the nomination. He can’t be a “DEI hire” because he wasn’t a hire at all.
another guess: Obama is clearly smart, speaks/writes in complete sentences, and welcomes lots of media attention and talks about his policies, while Harris has a tendency towards word salad, interviews poorly, avoids discussing issues, etc.
another guess: everyone seems to reject the idea that people prefer male to female candidates, but I’m still really not sure there isn’t a gender effect! This is very vibes-based on my part, and apparently the data goes the other way, so very uncertain here.
Seems to me that Obama had the level of charisma that Hillary did not. (Neither do Biden or Harris). Bill Clinton had charisma, too. (So did Bernie.)
Also, imagine that you had a button that would make everyone magically forget about race and gender for a moment. I think that the people who voted for Obama would still feel the same, but the people who voted for Hillary would need to think hard about why, and probably their only rationalization would be “so that Trump does not win”.
I am not an American, so my perception of American elections is probably extremely unrepresentative, but it felt like Obama was about “hope” and “change”, while Hillary was about “vote for Her, because she is a woman, so she deserves to be the president”.
I’m still really not sure there isn’t a gender effect!
I guess there are people (both men and women) who in principle wouldn’t vote for a woman leader. But there are also people who would be happy to give a woman a chance. Not sure which group is larger.
But the wannabe woman leader should not make her campaign about her being a woman. That feels like admitting that she has no other interesting qualities. She needs to project the aura of a competent person who just happens to be female.
In my country, I have voted for a woman candidate twice (1, 2), but they never felt like “DEI hires”. One didn’t have any woke agenda, the other was pro- some woke topics, but she never made them about her. (It was like “this is what I will support if you elect me”, not “this is what I am”.)
i have tended to think that the stuff with “intellectual-glamour” or “visionary” branding is actually pretty close to on-target. not always right, of course, often overhyped, but often still underinvested in even despite being highly hyped.
(a surprising number of famous scientists are starved for funding. a surprising number of inventions featured on TED, NYT, etc were never given resources to scale.)
I also am literally unconvinced that “Europe’s kindergarten” was less sophisticated than our own time! but it seems like a fine debate to have at leisure, not totally sure how it would play out.
he’s basically been proven right that energy has moved “underground” but that’s not a mode i can work very effectively in. if you have to be invited to participate, well, it’s probably not going to happen for me.
at the institutional level, he’s probably right that it’s wise to prepare for bad times and not get complacent. again, this was 2019; a lot of the bad times came later. i miss the good times; i want to believe they’ll come again.
https://www.celinehh.com/aging-field Celine Halioua on what the aging field needs—notably, more biotech companies that are prepared to run their own clinical trials specifically for aging-related endpoints.
a typical new biotech company never runs its own clinical trials—they license, partner, or get bought by pharma. but pharma’s not that into aging (yet) and nobody really has expertise in running aging-focused clinical trials, so that may need to happen first in a startup context. which means some investors have to be willing to put up more cash than usual....
it’s a “brown-yellow” pigmented substance (first observed under the microscope in the 19th century) that accumulates in post-mitotic cells with age.
it’s not one substance; it’s a mixture of “garbage” (mostly protein and lipid) that accumulates around the lysosome but can’t be disposed of through exocytosis.
it’s “autofluorescent”—it fluoresces in various wavelengths of light without being stained.
it accumulates more under conditions of oxidative stress like high-oxygen environments or in the presence of iron (which catalyzes oxidation reactions); it accumulates less in the presence of antioxidants and under caloric restriction.
evidence that lipofuscin accumulation causes disease or dysfunction seems a lot shakier in this paper.
I was a little self-conscious about her dissatisfaction with “San Francisco courtier culture”—of course she’s much better at the hustle than I ever was, but I actually love it. If anything, I’ve more often felt hurt that so many people I know got sick of the game before I ever really figured out how to play it.
https://dafny.org/ ”Dafny is a verification-aware programming language that has native support for recording specifications and is equipped with a static program verifier.”
Dafny’s formal verification is based on automated SMT solvers; compared to proof assistants like Coq/Lean/etc it’s less powerful
Dafny can be compiled to familiar languages such as C#, Java, JavaScript, Go, and Python
wow. this is a very close parallel (and historically contemporaneous) with the conquistadors and privateers of England, Spain, and Portugal in the Age of Exploration...except we don’t make movies and novels about it in the West. But the swashbuckling potential is amazing.
I’ll kind of give him Kipling and Cummings; those are genuine anti-communist, anti-monarchical-absolutism, and anti-war sentiments. Yeats is doing a different thing; I love him but he is Not Our Friend.
their latest album Only God Was Above Us is wrenching and it’s kind of getting to me lately.
most of the commentary in interviews is about how Koenig, now 40 with a 5-year-old kid, has matured and found peace (though if you listen to the lyrics it’s an extremely nihilistic sort of being “at peace” with a terrible world and giving up on trying to change it)
nobody is remarking on what I see as pretty explicit themes like:
last album’s “Harmony Hall” was about a sense of betrayal regarding Ivy-League antisemitism
this album is pretty clearly a rejection of the backlash, the Gen-X (“Gen X Cops”), ex-Eastern-Bloc (“Pravda”), or specifically Jewish (in the [[Bari Weiss]]/Tablet-mag vein) “vibe shift”.
there’s a lot of reflection on heritage and generation gaps, there’s the sense that someone (his elders? his family?) is pushing him in a direction and he doesn’t want to go that way, he thinks it doesn’t make sense in his generation, in this era, but he does care enough to be conflicted and to yearn over the pain of people still (mistakenly, he thinks) struggling (“Capricorn”).
https://endpts.com/biotech-industry-worries-over-potential-for-rfk-jr-ally-as-fda-pick/ Casey Means has been floated as the new pick for FDA head; apparently she’s expressed concerns about vaccines and over-medication on the Joe Rogan podcast and has written a book about how most chronic diseases can be prevented by healthy lifestyles (which probably overstates the case)
“FRET is a non-radiative transfer of energy from an excited donor fluorophore molecule to a nearby acceptor fluorophore molecule...When the biomolecule of interest is present, it can cause a change in the distance between the donor and acceptor, leading to a change in the efficiency of FRET and a corresponding change in the fluorescence intensity of the acceptor. This change in fluorescence can be used to detect and quantify the biomolecule of interest.”
advantages:
real-time
non-destructive
sensitive to very low concentrations (picomolar and nanomolar)
highly specific because it detects conformational changes in biological molecules
this article is from a not-great journal and the author clearly does not have English as a first language… at some point i will need a more reputable source, this was from googling FRET quickly
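The distance dependence behind that sensitivity is the standard Förster relation (textbook material, not from the article above): efficiency falls off with the sixth power of the donor-acceptor distance, so tiny conformational changes swing the signal dramatically.

```python
def fret_efficiency(r, r0):
    """Förster transfer efficiency vs. donor-acceptor distance r.
    r0 is the Förster radius: the distance at which E = 0.5."""
    return 1.0 / (1.0 + (r / r0) ** 6)
```

Because of the sixth power, E goes from roughly 0.98 at half the Förster radius down to roughly 0.015 at twice the Förster radius.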
https://www.astralcodexten.com/p/the-case-against-proposition-36 Clara Collier gives the narrow, evidence-based case that shorter jail sentences didn’t cause California’s property crime wave or drug overdose death epidemic, and longer jail sentences won’t fix those problems
I’m pretty convinced but I don’t follow this topic in great detail
metastatic malignant peripheral nerve sheath tumor is pretty bad—median survival is only 8 months after metastases are detected. but one M.O. that seems to help in several case studies is “sequence the tumor, find a mutation, use a drug that’s approved for other cancer types with the same mutation.”
PD-L1 overexpression? use a PD-1 inhibitor! checkpoint immunotherapy stays winning.
chemo is...not great but better than nothing. some partial responses, no complete responses, survival extended by maybe a few months. mostly it seems best to have doxorubicin in the mix.
mech-interp seems like straightforwardly real and good work from a variety of perspectives on AI. helps with many risk scenarios including some x-risk scenarios; helps make the technology stronger & more reliable, which is good for the industry in the long run.
I...guess this isn’t wrong, but it’s a kind of Take I’ve never been able to relate to myself. Maybe it’s because I found Legit True Love at age 22, but I’ve never had that feeling of “oh no the men around me are too weak-willed” (not in my neck of the woods they’re not!) or “ew they’re too interested in going to the gym” (gym rats are fine? it’s a hobby that makes you good-looking, I’m on board with this) or “they’re not attentive and considerate enough” (often a valid complaint, but typically I’m the one who’s too hyperfocused on my own work & interests) or “they’re too show-offy” (yeah it’s irritating in excess but a little bit of show-off energy is enlivening).
Look: you like Tony Soprano because he’s competent and lives by a code? But you don’t like it when a real-life guy is too competitive, intense, or off doing his own thing? I’m sorry, but that’s not how things work.
Tony Soprano can be light-hearted and always have time for the women around him because he is a fictional character. In real life, being good at stuff takes work and is sometimes stressful.
My husband is, in fact, very close to this “Tony Soprano” ideal—assertive, considerate, has “boyish charm”, lives by a “code”, is competent at lots of everyday-life things but isn’t too busy for me—and I guarantee you would not have thought to date him because he’s also nerdy and argumentative and wouldn’t fit in with the yuppie crowd.
Also like. This male archetype is a guy who fixes things for you and protects you and makes you feel good. In real life? Those guys get sad that they’re expected to give, give, give and nobody cares about their feelings. I haven’t watched The Sopranos but my understanding is that Tony is in therapy because the strain of this life is getting to him. This article doesn’t seem to have a lot of empathy with what it’s like to actually be Tony...and you probably should, if you want to marry him.
a framework for thinking about aging: “1st gen” is delaying aging, which is where the field started (age1, metformin, rapamycin), while “2nd gen” is pausing (stasis), repairing (reprogramming), or replacing (transplanting), cells/tissues. 2nd gen usually uses less mature technologies (eg cell therapy, regenerative medicine), but may have a bigger and faster effect size.
“function, feeling, and survival” are the endpoints that matter.
biomarkers are noisy and speculative early proxies that we merely hope will translate to a truly healthier life for the elderly. apply skepticism.
Psychotic “delusions” are more about holding certain genres of idea with a socially inappropriate amount of intensity and obsession than holding a false idea. Lots of non-psychotic people hold false beliefs (eg religious people). And, interestingly, it is absolutely possible to hold a true belief in a psychotic way.
I have observed people during psychotic episodes get obsessed with the idea that social media was sending them personalized messages (quite true; targeted ads are real) or the idea that the nurses on the psych ward were lying to them (they were).
Preoccupations with the revelation of secret knowledge, with one’s own importance, with mistrust of others’ motives, and with influencing others’ thoughts or being influenced by others’ thoughts are classic psychotic themes.
And it can be a symptom of schizophrenia when someone’s mind gets disproportionately drawn to those themes. This is called being “paranoid” or “grandiose.”
But sometimes (and I suspect more often with more intelligent/self-aware people) the literal content of their paranoid or grandiose beliefs is true!
sometimes the truth really has been hidden!
sometimes people really are lying to you or trying to manipulate you!
sometimes you really are, in some ways, important! sometimes influential people really are paying attention to you!
of course people influence each other’s thoughts—not through telepathy but through communication!
a false psychotic-flavored thought is “they put a chip in my brain that controls my thoughts.” a true psychotic-flavored thought is “Hollywood moviemakers are trying to promote progressive values in the public by implanting messages in their movies.”
These thoughts can come from the same emotional drive; they are drawn from dwelling on the same theme of “anxiety that one’s own thoughts are externally influenced”; they are, in a deep sense, mere arbitrary verbal representations of a single mental phenomenon...
but if you take the content literally, then clearly one claim is true and one is false.
and a sufficiently smart/self-aware person will feel the “anxiety-about-mental-influence” experience, will search around for a thought that fits that vibe but is also true, and will come up with something a lot more credible than “they put a mind-control chip in my brain”, but one that fundamentally comes from the same motive.
There’s an analogous but easier-to-recognize phenomenon with depression.
A depressed person’s mind is unusually drawn to obsessing over bad things. But this obviously doesn’t mean that no bad things are real or that no depressive’s depressing claims are true.
When a depressive literally believes they are already dead, we call that Cotard’s Delusion, a severe form of psychotic depression. When they say “everybody hates me” we call it a mere “distorted thought”. When they talk accurately about the heat death of the universe we call it “thermodynamics.” But it’s all coming from the same emotional place.
In general, mental illnesses, and mental states generally, provide a “tropism” towards thoughts that fit with certain emotional/aesthetic vibes.
Depression makes you dwell on thoughts of futility and despair
Anxiety makes you dwell on thoughts of things that can go wrong
Mania makes you dwell on thoughts of yourself as powerful or on the extreme importance of whatever you’re currently doing
Paranoid psychosis makes you dwell on thoughts of mistrust, secrets, and influencing/being influenced
You can, to some extent, “filter” your thoughts (or the ones you publicly express) by insisting that they make sense. You still have a bias towards the emotional “vibe” you’re disposed to gravitate towards; but maybe you don’t let absurd claims through your filter even if they fit the vibe. Maybe you grudgingly admit the truth of things that don’t fit the vibe but technically seem correct.
this does not mean that the underlying “tropism” or “bias” does not exist!!!
this does not mean that you believe things “only because they are true”!
in a certain sense, you are doing the exact same thing as the more overtly irrational person, just hiding it better!
the “bottom line” in terms of vibe has already been written, so it conveys no “updates” about the world
the “bottom line” in terms of details may still be informative because you’re checking that part and it’s flexible
“He’s not wrong but he’s still crazy” is a valid reaction to someone who seems to have a mental-illness-shaped tropism to their preoccupations.
eg if every post he writes, on a variety of topics, is negative and gloomy, then maybe his conclusions say more about him than about the truth concerning the topic;
he might still be right about some details but you shouldn’t update too far in the direction of “maybe I should be gloomy about this too”
Conversely, “this sounds like a classic crazy-person thought, but I still separately have to check whether it’s true” is also a valid and important move to make (when the issue is important enough to you that the extra effort is worth it).
Just because someone has a mental illness doesn’t mean every word out of their mouth is false!
(and of course this assumption—that “crazy” people never tell the truth—drives a lot of psychiatric abuse.)
link: https://roamresearch.com/#/app/srcpublic/page/71kfTFGmK
I once saw a video on Instagram of a psychiatrist recommending to other psychiatrists that they purchase ear scopes to check out their patients’ ears, because:
1. Apparently it is very common for folks with severe mental health issues to imagine that there is something in their ear (e.g., a bug, a listening device)
2. Doctors usually just say “you are wrong, there’s nothing in your ear” without looking
3. This destroys trust, so he started doing cursory checks with an ear scope
4. Far more often than he expected (I forget exactly, but something like 10-20%ish), there actually was something in the person’s ear—usually just earwax buildup, but occasionally something else like a dead insect—that was indeed causing the sensation, and he gained a clinical pathway to addressing his patients’ discomfort that he had previously lacked
This reminds me of dath ilan’s hallucination diagnosis from page 38 of Yudkowsky and Alicorn’s glowfic But Hurting People Is Wrong.
It’s pretty far from meeting dath ilan’s standard, though; in fact, an x-ray would be more than sufficient: anyone capable of planting something in someone’s ear would vastly prefer to place it somewhere harder to check, and nobody could defeat an x-ray machine, since metal parts are unavoidable.
This concern pops up in books on the Cold War (employees at every org and every company regularly suffer from mental illnesses at somewhere around their base rates, but things get complicated at intelligence agencies where paranoid/creative/adversarial people are rewarded and even influence R&D funding) and an x-ray machine cleanly resolved the matter every time.
Tangential, but...
Schizophrenia is the archetypal definitely-biological mental disorder, but recently for reasons relevant to the above, I’ve been wondering if that is wrong/confused. Here’s my alternate (admittedly kinda uninformed) model:
Psychosis is a biological state or neural attractor, which we can kind of symptomatically characterize, but which really can only be understood at a reductionistic level.
One of the symptoms/consequences of psychosis is getting extreme ideas at extreme amounts of intensity.
This symptom/consequence then triggers a variety of social dynamics that give classic schizophrenia-like symptoms such as, as you say, “preoccupation with the revelation of secret knowledge, with one’s own importance, with mistrust of others’ motives, and with influencing others’ thoughts or being influenced by others’ thoughts”.
That is, if you suddenly get an extreme idea (e.g. that the fly that flapped past you is a sign from god that you should abandon your current life), you would expect dynamics like:
People get concerned for you and try to dissuade you, likely even conspiring in private to do so (and even if they’re not conspiring, it can seem like a conspiracy). In response, it might seem appropriate to distrust them.
Or, if one interprets it as them just lacking the relevant information, one needs to develop some theory of why one has access to special information that they don’t.
Or, if one is sympathetic to their concern, it would be logical to worry about one’s thoughts getting influenced.
But these sorts of dynamics can totally be triggered by extreme beliefs without psychosis! This might also be related to how Enneagram type 5 (the rationalist type) is especially prone to schizophrenia-like symptoms.
(When I think “in a psychotic way”, I think of the neurological disorder, but it seems like the way you use it in your comment is more like the schizophrenia-like social dynamic?)
Also tangential, this is sort of a “general factor” model of mental states. That often seems applicable, but recently my default interpretation of factor models has been that they tend to get at intermediary variables and not root causes.
Let’s take an analogy with computer programs. If you look at the correlations in which sorts of processes run fast or slow, you might find a broad swathe of processes whose performance is highly correlated, because they are all predictably CPU-bound. However, when these processes are running slow, there will usually be some particular program that is exhausting the CPU and preventing the others from running. This problematic program can vary massively from computer to computer, so it is hard to predict or model in general, but often easy to identify in the particular case by looking at which program is most extreme.
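The analogy can be made concrete with a toy simulation (the five-process setup and all numbers are invented for illustration): across machines, processes look broadly correlated (a “general factor” of CPU availability), even though each machine’s slowdown has its own particular root cause.

```python
import random

# Toy model: each machine runs n processes; one "hog" process eats a random
# fraction of the CPU, and every other process's speed is proportional to
# whatever CPU is left over (plus a little process-specific noise).
def simulate_machine(n_procs=5, rng=random):
    hog = rng.randrange(n_procs)              # the machine-specific root cause
    free_cpu = 1.0 - rng.uniform(0.0, 0.9)    # CPU fraction the hog leaves over
    speeds = [free_cpu * rng.uniform(0.9, 1.1) for _ in range(n_procs)]
    speeds[hog] = 1.0                         # the hog itself runs at full speed
    return speeds

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

rng = random.Random(0)
machines = [simulate_machine(rng=rng) for _ in range(2000)]
# Processes 0 and 1 correlate across machines (shared CPU bottleneck),
# even though which process is the hog varies from machine to machine.
corr = pearson([m[0] for m in machines], [m[1] for m in machines])
```

The pooled correlation is clearly positive, but on any one slow machine the useful move is not "general CPU factor is low"; it is to look at which particular process is most extreme.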
Thank you, this is interesting and important. I worry that it overstates similarity of different points on a spectrum, though.
In a certain sense, yes. In other, critical senses, no. This is a case where quantitative differences are big enough to be qualitative. When someone is clinically delusional, there are a few things which distinguish it from the more common wrong ideas. Among them, the inability to shut up about it when it’s not relevant, and the large negative impact on relationships and daily life. For many many purposes, “hiding it better” is the distinction that matters.
I fully agree that “He’s not wrong but he’s still crazy” is valid (though I’d usually use less-direct phrasing). It’s pretty rare that “this sounds like a classic crazy-person thought, but I still separately have to check whether it’s true” happens to me, but it’s definitely not never.
I imagine they were obsessed with false versions of this idea, rather than obsession about targeted advertising?
no! it sounded like “typical delusion stuff” at first until i listened carefully and yep that was a description of targeted ads.
For a while I ended up spending a lot of time thinking about specifically the versions of the idea where I couldn’t easily tell how true they were… which I suppose I do think is the correct place to be paying attention to?
One has to be a bit careful with this though. E.g. someone experiencing or having experienced harassment may have a seemingly pathological obsession on the circumstances and people involved in the situation, but it may be completely proportional to the way that it affected them—it only seems pathological to people who didn’t encounter the same issues.
If it’s not serving them, it’s pathological by definition, right?
So obsessing about exactly those circumstances and types of people could be pathological if it’s done more than will protect them in the future, weighing in the emotional cost of all that obsessing.
Of course we can’t just stop patterns of thought as soon as we decide they’re pathological. But deciding it doesn’t serve me so I want to change it is a start.
Yes, it’s proportional to the way it affected them—but most of the effect is in the repetition of thoughts about the incident and fear of future similar experiences. Obsessing about unpleasant events is natural, but it often seems pretty harmful itself.
Trauma is a horrible thing. There’s a delicate balance between supporting someone’s right and tendency to obsess over their trauma while also supporting their ability to quit re-traumatizing themselves by simulating their traumatic event repeatedly.
This seems way too strong, otherwise any kind of belief or emotion that is not narrowly in pursuit of your goals is pathological.
I completely agree that it’s important to strike a balance between revisiting the incident and moving on.
This seems partially wrong. The thoughts are usually consequences of the damage that is done, and they can be unhelpful in their own right, but they are not usually the problem. E.g. if you know that X is an abuser and people don’t believe you, I wouldn’t go so far as saying your mental dissonance about it is the problem.
Some psychiatry textbooks classify “overvalued ideas” as distinct from psychotic delusions.
Depending on how wide you make the definition, a whole rag-bag of diagnoses from the DSM-5 involve overvalued ideas (e.g., anorexia nervosa’s overvalued idea that one is fat).
“we” can’t steer the future.
it’s wrong to try to control people or stop them from doing locally self-interested & non-violent things in the interest of “humanity’s future”, in part because this is so futile.
if the only way we survive is if we coerce people to make a costly and painful investment in a speculative idea that might not even work, then we don’t survive! you do not put people through real pain today for a “someday maybe!” This applies to climate change, AI x-risk, and socially-conservative cultural reform.
most cultures and societies in human history have been so bad, by my present values, that I’m not sure they’re not worse than extinction, and we should expect that most possible future states are similarly bad;
history clearly teaches us that civilizations and states collapse (on timescales of centuries) and the way to bet is that ours will as well, but it’s kind of insane hubris to think that this can be prevented;
the literal species Homo sapiens is pretty resilient and might avoid extinction for a very long time, but have you MET Homo sapiens? this is cold fucking comfort! (see e.g. C. J. Cherryh’s vision in 40,000 in Gehenna for a fictional representation not far from my true beliefs — we are excellent at adaptation and survival but when we “survive” this often involves unimaginable harshness and cruelty, and changing into something that our ancestors would not have liked at all.)
identifying with species-survival instead of with the stuff we value now is popular among the thoughtful but doesn’t make any sense to me;
in general it does not make sense, to me, to compromise on personal values in order to have more power/influence. you will be able to cause stuff to happen, but who cares if it’s not the stuff you want?
similarly, it does not make sense to consciously optimize for having lots of long-term descendants. I love my children; I expect they’ll love their children; but go too many generations out and it’s straight-up fantasyland. My great-grandparents would have hated me. And that’s still a lot of shared culture and values! Do you really have that much in common with anyone from five thousand years ago?
Evolution is not your friend. God is not your friend. Everything worth loving will almost certainly perish. Did you expect it to last forever?
“I love whatever is best at surviving” or “I love whatever is strongest” means you don’t actually care what it’s like. It means you have no loyalty and no standards. It means you don’t care so much if the way things turn out is hideous, brutal, miserable, abusive… so long as it technically “is alive” or “wins”. Fuck that.
I despise sour grapes. If the thing I want isn’t available, I’m not going to pretend that what is available is what I want.
I am not going to embrace the “realistic” plan of allying with something detestable but potent. There is always an alternative, even if the only alternative is “stay true to your dreams and then get clobbered.”
Link to this on my Roam
How does “this is so futile” square with the massive success of taxes and criminal justice? From what I’ve heard, states have managed to reduce murder rates by 50x. Obviously that’s stopping people from something violent rather than non-violent, but what’s the aspect of violence that makes it relevant? Or e.g. how about taxes which fund change to renewable energy? The main argument for socially-conservative cultural reform is fertility, but what about taxes that fund kindergartens, they sort of seem to have a similar function?
The key trick to make it correct to try to control people or stop them is to be stronger than them.
I think this prompts some kind of directional update in me. My paraphrase of this is:
it’s actually pretty ridiculous to think you can steer the future
It’s also pretty ridiculous to choose to identify with what the future is likely to be.
Therefore…. Well, you don’t spell out your answer. My answer is “I should have a personal meaning-making resolution to ‘what would I do if those two things are both true,’ even if one of them turns out to be false, so that I can think clearly about whether they are true.”
I’ve done a fair amount of similar meaningmaking work through the lens of Solstice 2022 and 2023. But that was more through lens of ‘nearterm extinction’ than ‘inevitability of value loss’, which does feel like a notably different thing.
So it seems worth doing some thinking and pre-grieving about that.
I of course have some answers to ‘why value loss might not be inevitable’, but it’s not something I’ve yet thought about through an unclouded lens.
Therefore, do things you’d be in favor of having done even if the future will definitely suck. Things that are good today, next year, fifty years from now… but not like “institute theocracy to raise birth rates”, which is awful today even if you think it might “save the world”.
Ah yeah that’s a much more specific takeaway than I’d been imagining.
I honestly feel that the only appropriate response is something along the lines of “fuck defeatism”[1].
This comment isn’t targeted at you, but at a particular attractor in thought space.
Let me try to explain why I think rejecting this attractor is the right response rather than engaging with it.
I think it’s mostly that I don’t think that talking about things at this level of abstraction is useful. It feels much more productive to talk about specific plans. And if you have a general, high-abstraction argument that plans in general are useless, but I have a specific argument why a specific plan is useful, I know which one I’d go with :-).
Don’t get me wrong, I think that if someone struggles for a certain amount of time to try to make a difference and just hits wall after wall, then at some point they have to call it. But “never start” and “don’t even try” are completely different.
It’s also worth noting, that saving the world is a team sport. It’s okay to pursue a plan that depends on a bunch of other folk stepping up and playing their part.
I would also suggest that this is the best way to respond to depression rather than “trying to argue your way out of it”.
I’m not defeatist! I’m picky.
And I’m not talking specifics because i don’t want to provoke argument.
What about influencing? If, in order for things to go OK, human civilization must follow a narrow path which I individually need to steer us down, we’re 100% screwed because I can’t do that. But I do have some influence. A great deal of influence over my own actions (I’m resisting the temptation to go down a sidetrack about determinism, assuming you’re modeling humans as things that can make meaningful choices), substantial influence over the actions of those close to me, some influence over my acquaintances, and so on until very extremely little (but not 0) influence over humanity as a whole.

I also note that you use the word “we”, but I don’t know who the “we” is. Is it everyone? If so, then everyone collectively has a great deal of say about how the future will go, if we collectively can coordinate. Admittedly, we’re not very good at this right now, but there are paths to developing this civilizational skill further than we currently have. So maybe the answer to “we can’t steer the future” is “not yet we can’t, at least not very well”?
Agree, mostly. The steering I would aim for would be setting up systems wherein locally self-interested and non-violent things people are incentivized to do have positive effects for humanity’s future. In other words, setting up society such that individual and humanity-wide effects are in the same direction with respect to some notion of “goodness”, rather than individual actions harming the group, or group actions harming or stifling the individual.

We live in a society where we can collectively decide the rules of the game, which is a way of “steering” a group. I believe we should settle on a ruleset where individual short-term moves that seem good lead to collective long-term outcomes that seem good. Individual short-term moves that clearly lead to bad collective long-term outcomes should be disincentivized, and if the effects are bad enough then coercive prevention does seem warranted (e.g., a SWAT team to prevent a mass shooting). And similarly for groups stifling individuals’ ability to do things that seem to them to be good for them in the short term. And rules that have perverse incentive effects that are harmful to the individual, the group, or both? Definitely out.

This type of system design is like a haiku—very restricted in what design choices are permissible, but not impossible in principle. Seems worth trying because if successful, everything is good with no coercion. If even a tiny subsystem can be designed (or the current design tweaked) in this way, that by itself is good.
And the right local/individual move to influence the systems of which you are a part towards that state, as a cognitively-limited individual who can’t hold the whole of complex systems in their mind and accurately predict the effect of proposed changes out into the far future, might be as simple as saying “in this instance, you’re stifling the individual” and “in this instance you’re harming the group/long-term future” wherever you see it, until eventually you get a system that does neither. Like arriving at a haiku by pointing out every time the rules of haiku construction are violated.
I disagree a lot! Many things have gotten better! Are suffrage, abolition, democracy, property rights, etc. not significant? All the random stuff that e.g. The Better Angels of Our Nature claims has gotten better.
Either things have improved in the past or they haven’t, and people trying to “steer the future” either have or haven’t been influential in those improvements. I think things have improved, and I think there’s definitely not strong evidence that trying to steer the future was always useless. Because trying to steer the future is very important and motivating, I try to do it.
Yes the counterfactual impact of you individually trying to steer the future may or may not be insignificant, but people trying to steer the future is better than no one doing that!
“Let’s abolish slavery,” when proposed, would make the world better now as well as later.
I’m not against trying to make things better!
I’m against doing things that are strongly bad for present-day people to increase the odds of long-run human species survival.
Proposal: For any given system, there’s a destiny based on what happens when it’s developed to its full extent. Sight is an example of this, where both human eyes and octopus eyes and cameras have ended up using lenses to steer light, despite being independent developments.
“I love whatever is the destiny” is, as you say, no loyalty and no standards. But, you can try to learn what the destiny is, and then on the basis of that decide whether to love or oppose it.
Plants and solar panels are the natural destiny for earthly solar energy. Do you like solarpunk? If so, good news, you can love the destiny, not because you love whatever is the destiny, but because your standards align with the destiny.
People who love solarpunk don’t obviously love computronium dyson spheres tho
That is true, though:
1) Regarding tiling the universe with computronium as destiny is Gnostic heresy.
2) I would like to learn more about the ecology of space infrastructure. Intuitively it seems to me like the Earth is much more habitable than anywhere else, and so I would expect Sarah’s “this is so futile” point to actually be inverted when it comes to e.g. a Dyson sphere, where the stagnation-inducing worldwide regulation will by default be stronger than the entropic pressure.
More generally, I have a concept I call the “infinite world approximation”, which I think held until ~WWI. Under this approximation, your methods have to be robust against arbitrary adversaries, because they could invade from parts of the ecology you know nothing about. However, this approximation fails for Earth-scale phenomena, since Earth-scale organizations could shoot down any attempt at space colonization.
Are you saying this because you worship the sun?
I would more say the opposite: Henri Bergson (better known for inventing vitalism) convinced me that there ought to be a simple explanation for the forms life takes, and so I spent a while performing root cause analysis on that, and ended up with the sun as the creator.
This post reads like it’s trying to express an attitude or put forward a narrative frame, rather than trying to describe the world.
Many of these claims seem obviously false, if I take them at face value and take a moment to consider what they’re claiming and whether it’s true.
e.g., On the first two bullet points it’s easy to come up with counterexamples. Some successful attempts to steer the future, by stopping people from doing locally self-interested & non-violent things, include: patent law (“To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries”) and banning lead in gasoline. As well as some others that I now see that other commenters have mentioned.
It seems like it makes some difference whether our civilization collapses the way that the Roman Empire collapsed, the way that the British Empire collapsed, or the way that the Soviet Union collapsed. “We must prevent our civilization from ever collapsing” is clearly an implausible goal, but “we should ensure that a successor structure exists and is not much worse than what we have now” seems rather more reasonable, no?
Is it too much to declare this the manifesto of a new philosophical school, Constantinism?
wait and see if i still believe it tomorrow!
I don’t think it was articulated quite right—it’s more negative than my overall stance (I wrote it when unhappy) and a little too short-termist.
I do still believe that the future is unpredictable, that we should not try to “constrain” or “bind” all of humanity forever using authoritarian means, and that there are many many fates worse than death and we should not destroy everything we love for “brute” survival.
And, also, I feel that transience is normal and only a bit sad. It’s good to save lives, but mortality is pretty “priced in” to my sense of how the world works. It’s good to work on things that you hope will live beyond you, but Dark Ages and collapses are similarly “priced in” as normal for me. Sara Teasdale: “You say there is no love, my love, unless it lasts for aye; Ah folly, there are episodes far better than the play!” If our days are as a passing shadow, that’s not that bad; we’re used to it.
I worry that people who are not ok with transience may turn themselves into monsters so they can still “win”—even though the meaning of “winning” is so changed it isn’t worth it any more.
I do think this comes back to the messages in On Green and also why the post went down like a cup of cold sick—rationality is about winning. Obviously nobody on LW wants to “win” in the sense you describe, but more winning over more harmony on the margin, I think.
The future will probably contain less of the way of life I value (or something entirely orthogonal), but then that’s the nature of things.
Two cruxes IMO dominate the discussion here:
1. Will a value lock-in event happen, especially soon, such that once values are locked in it’s basically impossible to change them?
2. Is something like the vulnerable world hypothesis correct about technological development?
If you believed 1 or 2, I could see why you’d disagree with Sarah Constantin’s statement here.
I have been having similar thoughts on the main points here for a while, so thanks for this.
I guess to me what needs attention is when people do things along the lines of “benefit themselves and harm other people”. That “harm” has a pretty strict definition, though I know we can always find borderline examples. This definitely includes the abuse of power in our current society and culture, and current risks, etc. (For example, even constraining to just AI, there is content that warrants warnings: https://www.iwf.org.uk/media/q4zll2ya/iwf-ai-csam-report_public-oct23v1.pdf. This is very sad to see.) On the other hand, with regard to climate change (which is also current) or AI risks, we should also be concerned when corporations or developers neglect known risks or pursue science and development irresponsibly. I think it is not wrong to work on these; I just don’t believe in “do not solve the other current risks and only work on future risks.”
Regarding the comments saying our society is “getting better”—sure, but the baseline is a very low bar (slavery, for example). There are still many, many, many examples in different societies of how things are very systematically messed up.
You seem to dislike reality. Could it not be that the worldview which clashes with reality is wrong (or rather, in the wrong), rather than reality being wrong/in the wrong? For instance that “nothing is forever” isn’t a design flaw, but one of the required properties that a universe must have in order to support life?
“weak benevolence isn’t fake”: https://roamresearch.com/#/app/srcpublic/page/ic5Xitb70
there’s a class of statements that go like:
“fair-weather friends” who are only nice to you when it’s easy for them, are not true friends at all
if you don’t have the courage/determination to do the right thing when it’s difficult, you never cared about doing the right thing at all
if you sometimes engage in motivated cognition or are sometimes intellectually lazy/sloppy, then you don’t really care about truth at all
if you “mean well” but don’t put in the work to ensure that you’re actually making a positive difference, then your supposed “well-meaning” intentions were fake all along
I can see why people have these views.
if you actually need help when you’re in trouble, then “fair-weather friends” are no use to you
if you’re relying on someone to accomplish something, it’s not enough for them to “mean well”, they have to deliver effectively, and they have to do so consistently. otherwise you can’t count on them.
if you are in an environment where people constantly declare good intentions or “well-meaning” attitudes, but most of these people are not people you can count on, you will find yourself caring a lot about how to filter out the “posers” and “virtue signalers” and find out who’s true-blue, high-integrity, and reliable.
but I think it’s literally false and sometimes harmful to treat “weak”/unreliable good intentions as absolutely worthless.
not all failures are failures to care enough/try hard enough/be brave enough/etc.
sometimes people legitimately lack needed skills, knowledge, or resources!
“either I can count on you to successfully achieve the desired outcome, or you never really cared at all” is a long way from true.
even the more reasonable, “either you take what I consider to be due/appropriate measures to make sure you deliver, or you never really cared at all” isn’t always true either!
some people don’t know how to do what you consider to be due/appropriate measures
some people care some, but not enough to do everything you consider necessary
sometimes you have your own biases about what’s important, and you really want to see people demonstrate a certain form of “showing they care”, and you’ll consider them negligent otherwise; but that’s not actually the most effective way to increase their success rate
almost everyone has a finite amount of effort they’re willing to put into things, and a finite amount of cost they’re willing to pay. that doesn’t mean you need to dismiss the help they are willing and able to provide.
as an extreme example, do you dismiss everybody as “insufficiently committed” if they’re not willing to die for the cause? or do you accept graciously if all they do is donate $50?
“they only help if it’s fun/trendy/easy/etc”—ok, that can be disappointing, but is it possible you should just make it fun/trendy/easy/etc? or just keep their name on file in case a situation ever comes up where it is fun/trendy/easy and they’ll be helpful then?
it’s harmful to apply this attitude to yourself, saying “oh I failed at this, or I didn’t put enough effort in to ensure a good outcome, so I must literally not care about ideals/ethics/truth/other people.”
like...you do care some amount. you did, in fact, mean well.
you may have lacked skill;
you may have not been putting in enough effort;
or maybe you care somewhat but not as much as you care about something else
but it’s probably not accurate or healthy to take a maximally-cynical view of yourself where you have no “noble” motives at all, just because you also have “ignoble” motives (like laziness, cowardice, vanity, hedonism, spite, etc).
if you have a flicker of a “good intention” to help people, make the world a better place, accomplish something cool, etc, you want to nurture it, not stomp it out as “probably fake”.
your “good intentions” are real and genuinely good, even if you haven’t always followed through on them, even if you haven’t always succeeded in pursuing them.
you don’t deserve “credit” for good intentions equal to the “credit” for actually doing a good thing, but you do deserve some credit.
basic behavioral “shaping”—to get from zero to a complex behavior, you have to reward very incremental simple steps in the right direction.
e.g. if you wish you were “nicer to people”, you may have to pat yourself on the back for doing any small acts of kindness, even really “easy” and “trivial” ones, and notice & make part of your self-concept any inclinations you have to be warm or helpful.
“I mean well and I’m trying” has to become a sentence you can say with a straight face. and your good intentions will outpace your skills so you have to give yourself some credit for them.
it may be net-harmful to create a social environment where people believe their “good intentions” will be met with intense suspicion.
it’s legitimately hard to prove that you have done a good thing, particularly if what you’re doing is ambitious and long-term.
if people have the experience of meaning well and trying to do good but constantly being suspected of insincerity (or nefarious motives), this can actually shift their self-concept from “would-be hero” to “self-identified villain”
which is bad, generally
at best, identifying as a villain doesn’t make you actually do anything unethical, but it makes you less effective, because you preemptively “brace” for hostility from others instead of confidently attracting allies
at worst, it makes you lean into legitimately villainous behavior
OTOH, skepticism is valuable, including skepticism of people’s motives.
but it can be undesirable when someone is placed in a “no-win situation”, where from their perspective “no matter what I do, nobody will believe that I mean well, or give me any credit for my good intentions.”
if you appreciate people for their good intentions, sometimes that can be a means to encourage them to do more. it’s not a guarantee, but it can be a starting point for building rapport and starting to persuade. people often want to live up to your good opinion of them.
The picture I get of Chinese culture from their fiction makes me think China is kinda like this. A recurrent trope was “If you do some good deeds, like offering free medicine to the poor, and don’t do a perfect job, like treating everyone who says they can’t afford medicine, then everyone will castigate you for only wanting to seem good. So don’t do good.” Another recurrent trope was “it’s dumb, even wrong, to be a hero/you should be a villain.” (One annoying variant is “kindness to your enemies is cruelty to your allies”, which is used to justify pointless cruelty.) I always assumed this was a cultural antibody formed in response to communists doing terrible things in the name of the common good.
Sounds like it’s time for a reboot of the ol’ “join the dark side” essay.
I want to register in advance, I have qualms I’d be interested in talking about. (I think they are at least one level more interesting than the obvious ones, and my relationship with them is probably at least one level more interesting than the obvious relational stance)
links 10/25/24: https://roamresearch.com/#/app/srcpublic/page/10-25-2024
https://theoryandpractice.org/2024/10/Yes,%20we%20did%20discover%20the%20Higgs!/ CERN’s statistical methods are good actually. compare this to any other stats-heavy area of natural or social science and they come out impressively rigorous. blinded data analyses? whoa.
https://en.m.wikipedia.org/wiki/Siege_engine a siege engine is any machine you use against the city you’re besieging—from towers to catapults to flamethrowers to artillery.
there’s a new field of “pan-cancer” where you make (mostly molbio) comparisons across cancers, including vulnerability screens where you use CRISPR or RNAi to knock down each gene and see which ones kill the cancer cells when absent.
https://aacrjournals.org/clincancerres/article/24/9/2182/81290/Pan-Cancer-Molecular-Classes-Transcending-Tumor representative paper
https://www.nature.com/articles/s41467-019-13528-0 you can also do it with the proteome
https://www.nature.com/articles/s41422-020-0355-0 you can single-cell it
https://www.science.org/doi/10.1126/science.abe6474 you can profile the TILs cell by cell
https://link.springer.com/content/pdf/10.1186/s13059-023-03020-w.pdf CRISPR and RNAi both have their strengths and weaknesses but if you look at the overlap there are still a bunch of “pan-essential” genes that all cancers need to survive. (do healthy cells also need those, or are they good therapeutic targets? we just don’t know.)
https://www.researchgate.net/profile/Yize-Li/publication/354641293_Moving_pan-cancer_studies_from_basic_research_toward_the_clinic/links/615f5d570bf51d4817512465/Moving-pan-cancer-studies-from-basic-research-toward-the-clinic.pdf?_sg%5B0%5D=started_experiment_milestone&_sg%5B1%5D=started_experiment_milestone&origin=journalDetail by “towards the clinic” we mean “very gingerly”, apparently
https://www.cell.com/cancer-cell/fulltext/S1535-6108(20)30656-5 when you target pan-essential genes you are in the chemo zone, where by default the therapeutic index is low (kills cancer AND healthy cells) and you need to put more work in to handling toxicity.
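the “overlap” idea in those vulnerability screens can be sketched as a toy set computation. everything below is invented for illustration—gene names, hit sets, and the two-modality agreement rule are hypothetical stand-ins for what real genome-wide screens (e.g. DepMap-style data) would give you:

```python
# Toy sketch: call a gene "pan-essential" if BOTH screening modalities
# (CRISPR knockout and RNAi knockdown) score it essential in EVERY line.
# Hit sets are made up; real screens score ~20k genes in hundreds of lines.

def pan_essential(crispr_hits, rnai_hits):
    """Genes essential in every shared cell line by both modalities."""
    lines = crispr_hits.keys() & rnai_hits.keys()
    consensus = None
    for line in lines:
        both = crispr_hits[line] & rnai_hits[line]  # modalities agree within one line
        consensus = both if consensus is None else consensus & both
    return consensus or set()

crispr = {
    "lineA": {"PLK1", "RPL3", "MYC"},
    "lineB": {"PLK1", "RPL3", "KRAS"},
}
rnai = {
    "lineA": {"PLK1", "RPL3"},
    "lineB": {"PLK1", "RPL3", "MYC"},
}

# survives the intersection across both modalities and both lines:
# {"PLK1", "RPL3"}
hits = pan_essential(crispr, rnai)
```

the point of requiring both modalities is exactly the CRISPR-vs-RNAi discrepancy the linked paper discusses: each method has its own false positives, so the intersection is the conservative call.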
tumors—most tumors—get coated with IgG, much more than other tissues. is this a systemic defense against cancer or tumor-secreted IgG? hard to say.
https://www.cell.com/cell/pdf/S0092-8674(22)00192-1.pdf mostly ovarian carcinoma but they also compare to a bunch of other tumor types
https://www.mdpi.com/1422-0067/22/21/11597 could be tumor-derived
https://www.nature.com/articles/srep05088.pdf this is Sanford Simon—endogenous IgG concentrates around mouse tumors of many types
James Watson’s vision for cancer research—this is what he originally became a “controversial figure” for, before the race thing. https://royalsocietypublishing.org/doi/full/10.1098/rsob.120144
basically, this is a few things:
a call to work harder dammit and treat it like a true war on cancer, not a sedate and bureaucratic academic field
a call for more pan-cancer RNAi vulnerability screens
a call to focus on transcription factors as targets, particularly things like Myc and BRD4 that are particularly involved in the transition to metastasis—we don’t yet have any good drug therapies that work well on metastatic cancers
transcription factors are obviously causally upstream of what makes cancer cancer—its invasiveness, its metastatic potential, its evasion of immune surveillance, etc
they are hard to drug though, because they’re in the nucleus, not on the cell surface. but we can start to do hard things now!
cell surface growth factors (think EGFR) are the easiest to target but the associated drugs have unimpressive clinical effects in most patients because targeting growth factors only slows growth, it doesn’t kill cancer cells. usually just slightly delays the inevitable.
a statement of his redox hobbyhorse—ROS is good, ROS is how the body fights cancer, etc.
not sure how to operationalize this as a strategy. it might, as it turns out, be redundant with immunotherapy.
a couple specific targets/mechanisms he thinks deserve more attention—apparently the circadian regulator PER2 is a tumor suppressor. i’m always down for more attention to circadian stuff.
the Halifax Project researched the hypothesis that low-dose combinations of environmental carcinogens might synergistically increase cancer risk:
https://www.ewg.org/research/rethinking-carcinogens
https://www.degruyter.com/document/doi/10.1515/reveh-2020-0033/html
https://en.wikipedia.org/wiki/CpG_oligodeoxynucleotide this is the inflammatory molecule on bacteria that’s the reason bacterial infections sometimes cause complete regressions of very difficult tumors (like sarcoma—we have no drugs for sarcoma! it’s either surgery or death!). fortunately the immuno-oncology people are On It and researching this as an immunostimulant.
if you’ve heard of “Coley Toxins”, they’re kind of an alt-med thing with a tantalizing grain of truth—but we don’t need to inject bacteria into tumors any more, we know how they work, we can replicate the effect with well-defined compounds now.
https://www.cell.com/cell-chemical-biology/fulltext/S2451-9456(23)00221-0?rss=yes#mmc1 this is AOH1996, the mindblowingly selective new pan-cancer drug candidate.
basically this is using the same principle as old-fashioned chemo—hit it in the DNA replication—but with a new target, and with modern structural-biology-based rational drug design to hit the cancer version of the target rather than the healthy-cell type.
would I have guessed there was room for optimism here a priori? no way.
but apparently we have not explored this space sufficiently. now try it with AlphaFold.
https://www.science.org/content/blog-post/new-mode-cancer-treatment and Derek Lowe is impressed.
https://www.nature.com/articles/s41419-021-04468-z inducible caspase 9 allows conditional apoptosis. it’s incredibly powerful. unfortunately it doesn’t always work and this raises drug resistance concerns.
I haven’t yet seen many examples of “put the iCasp9 in the cell if-and-only-if the cell has some molecular marker” but that’s the obvious place to go.
you can kinda reduce the drug resistance thing by putting in a promoter to increase iCasp9 expression. buddy if this is where we are in 2022 i’m going to predict there is a LOT of potential value in continuing to work out the kinks in this system. get in on the ground floor!
https://en.wikipedia.org/wiki/Proteolysis_targeting_chimera a PROTAC is “this protein? kill it.” uses the ubiquitin system.
sadly, the Warburg Effect is not as cool as I once thought.
glucose deprivation is just not that deadly across tumor lines: https://www.sciencedirect.com/science/article/abs/pii/S0899900720300319
https://pmc.ncbi.nlm.nih.gov/articles/PMC3237863/ i mean, 2DG + metformin might do a thing?
https://pmc.ncbi.nlm.nih.gov/articles/PMC5095922/#BST-2016-0094C20 yeah...it’s not what you think.
https://www.sciencedirect.com/science/article/abs/pii/S0006291X0302504X if you actually measure what % of ATP comes from glycolysis, cancers cover a wide range, and the distribution overlaps substantially with the distribution of healthy cells. glycolysis dominance is not a distinguishing characteristic of all or even most cancers.
https://journals.sagepub.com/doi/pdf/10.2310/7290.2015.00021 heavy glucose uptake is enough of a thing for PET imaging to be used clinically though
are cancer cells selectively vulnerable to mechanical stress? kinda, but also it sometimes stimulates them to go metastatic so beware.
https://analyticalsciencejournals.onlinelibrary.wiley.com/doi/full/10.1002/elsc.201900154 vibrate em and they apoptose
https://www.sciencedirect.com/science/article/abs/pii/S0168365915302819 just mcfuckin spin some tiny magnets around in your brain tumor. apparently it works in rodents but aaaaaah
https://www.sciencedirect.com/science/article/pii/S0167488913003224 laminar but not oscillatory shear stress kills em? no non-cancer comparison
https://www.sciencedirect.com/science/article/pii/S2949907024000585 vibrations to kill prostate cancer? no non-cancer comparison
https://pubs.rsc.org/en/content/articlelanding/2015/nr/c5nr03518j/unauth magnetic particle vibration against renal cancer? no non-cancer comparison
https://www.nature.com/articles/s41557-023-01383-y.epdf?sharing_token=jICYt2mKBMQ0GsetiWodv9RgN0jAjWel9jnR3ZoTv0PPtLuduvirY9e9lvJJx5Q_iJTfP9UCvLlXVOkNBly5J-gi3DlHLxMYWqsmEJBOrH0s7RbtQm1UREc3FbrfF2vDNLzTfS250KEAwBdVsczhxamax0pSp4TP23jM_ehG703560use7dJ6hnsaVLpnXsWU1n14UplHLGvaXHsJ444z96C3IEcjmnjMZvijAgkKsQ%3D&tracking_referrer=www.genengnews.com “vibronic molecular jackhammers”? still no non-cancer comparison, but at least they tried some mice
https://www.nature.com/articles/s41413-020-00111-3 vibration to reduce metastatic potential of breast cancer cells. no non-cancer comparison.
https://www.science.org/doi/10.1126/science.adp7206 gently ultrasound the tumor to sensitize it to chemo or induce an immune anti-tumor response.
https://pmc.ncbi.nlm.nih.gov/articles/PMC10068349/ Piezo1 might be involved in an apoptotic response to mechanical stress?
https://analyticalsciencejournals.onlinelibrary.wiley.com/doi/full/10.1002/elsc.201900154 vibrate some cancer cells and they go apoptotic but not necrotic. no non-cancer comparison.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4703126 more ultrasound, including in vivo, Piezo1 mediated.
https://pmc.ncbi.nlm.nih.gov/articles/PMC8274378/ mechanical stress is also a natural feature of cancer—tumors get more rigid and experience pressure. in fact this stress can be a trigger for increased proliferation or metastasis, so watch out!
https://pmc.ncbi.nlm.nih.gov/articles/PMC5992512/ oops shear stress can promote metastasis
https://www.cell.com/biophysj/fulltext/S0006-3495(22)00367-8 substrate stiffness promotes invasion and metastasis
https://www.frontiersin.org/journals/pharmacology/articles/10.3389/fphar.2022.955595/full “mechanoptosis” (mechanical pressure causing cancer apoptosis)
https://elifesciences.org/for-the-press/12916d1e/migrating-through-small-spaces-makes-cancer-cells-more-aggressive squish cancer cells through tight spaces (eg on a microfluidic chip) and you get more invasive/metastatic potential
are cancer cells selectively vulnerable to electrical stress? also kinda yeah
https://www.mdpi.com/2072-6694/13/9/2283 “tumor treating fields”, just an oscillating electric field, are actually an approved therapy in glioblastoma that extends life a few months. (not saying much though...glioblastoma is so deadly that it’s easy mode from an FDA standpoint)
of course you can just kill *cells* with pulsed electric fields, cancer or not: https://faseb.onlinelibrary.wiley.com/doi/abs/10.1096/fj.02-0859fje
https://jamanetwork.com/journals/jama/article-abstract/2475446 more electrical fields for glioblastoma
https://aacrjournals.org/cancerres/article/64/9/3288/517864/Disruption-of-cancer-Cell-Replication-by ah this actually IS a differential effect in tumor vs. non cancer cell lines. plus in vivo, in mice.
i don’t even know man. somebody who knows physics explain this. little nanoelectrodes with some chemical functionalization kill cancer cells? “quantum biological tunneling?” https://www.nature.com/articles/s41565-023-01496-y
https://nyulangone.org/news/coping-mechanism-suggests-new-way-make-cancer-cells-more-vulnerable-chemotherapies “stress granules” as a form of chemo resistance, driven by KRAS?
https://www.nature.com/articles/s41420-022-01202-2 cancer can be selectively vulnerable to proteotoxic stress. they’re worse at expressing heat shock proteins.
https://www.researchgate.net/profile/Pietro-Taverna-2/publication/221748736_The_Novel_Oral_Hsp90_Inhibitor_NVP-HSP990_Exhibits_Potent_and_Broad-spectrum_Antitumor_Activities_In_Vitro_and_In_Vivo/links/56d0a18708ae059e375d4920/The-Novel-Oral-Hsp90-Inhibitor-NVP-HSP990-Exhibits-Potent-and-Broad-spectrum-Antitumor-Activities-In-Vitro-and-In-Vivo.pdf heat shock protein inhibitor reduces tumor growth in many cell lines
cancer cells have depolarized membranes—you can literally distinguish them from healthy cells by voltage alone.
this is a Michael Levin thing. https://pmc.ncbi.nlm.nih.gov/articles/PMC3528107/ you can give a frog a tumor—or make the tumor go away—through manipulating voltage alone! it does not matter what ion channel you use, it’s about the voltage.
more Michael Levin https://pmc.ncbi.nlm.nih.gov/articles/PMC4267524/#R250
apparently Wnt signaling is involved. https://physoc.onlinelibrary.wiley.com/doi/abs/10.1113/JP278661 in general you get alterations in membrane voltage potential by changing the behavior of ion channels
https://jamanetwork.com/journals/jamasurgery/article-abstract/591620#google_vignette depolarization occurs early in the development of colon cancer in mice exposed to a carcinogen.
https://www.medigraphic.com/pdfs/hepato/ah-2017/ah172s.pdf cancer stem cells are depolarized relative to normal stem cells
https://www.nature.com/articles/s41598-021-92951-0.pdf here’s math modeling if you care.
https://www.frontiersin.org/journals/physiology/articles/10.3389/fphys.2013.00185/full a lot of ion channels are involved
https://karger.com/tbi/article-abstract/15/3/147/299607/Electrical-Potential-Measurements-in-Human-Breast breast cancers vs non-cancer tumors show up differently on an external volt meter!!!!
literally, you can try this at home! stick a voltmeter across your boob!
https://aacrjournals.org/cancerres/article/40/6/1830/484668/Cellular-Potentials-of-Normal-and-cancerous cancerous cells have lower membrane potential than their healthy counterparts
https://nyaspubs.onlinelibrary.wiley.com/doi/abs/10.1111/j.1749-6632.1974.tb26808.x even in non-cancer cells membrane potential correlates negatively with proliferation
https://pmc.ncbi.nlm.nih.gov/articles/PMC9652252/ cancer membrane potentials also fluctuate more than healthy cell membrane potentials
https://aacrjournals.org/amjcancer/article/32/2/240/679553/Bio-Electric-Properties-of-Cancer-Resistant-and going back to 1938, if you put a volt meter across a mouse’s body you can tell the ones with tumors from the ones without. it is literally that simple and has been known that long.
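the “distinguish them by voltage alone” claim above can be sketched as a toy threshold check. the readings and the −40 mV cutoff are hypothetical illustrations (healthy cells sit roughly in the −50 to −90 mV range, proliferative/cancerous cells closer to −10 to −30 mV in the bioelectricity literature), not values taken from these papers:

```python
# Toy sketch: flag cells whose resting membrane potential (Vmem, in mV)
# is LESS negative (depolarized) than a chosen cutoff.
# The -40 mV threshold is a hypothetical illustration, not a validated value.

def flag_depolarized(vmem_mv, threshold_mv=-40.0):
    """True for each reading above (less negative than) the cutoff."""
    return [v > threshold_mv for v in vmem_mv]

# hypothetical single-cell readings: two "healthy-like", two "depolarized"
readings = [-70.0, -15.0, -55.0, -22.0]
flags = flag_depolarized(readings)  # [False, True, False, True]
```

obviously real measurement is the hard part; the classification rule itself is this simple, which is what makes the 1938 result unsurprising in retrospect.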
something in (some of?) the neutrophils of (some) humans, and of a cancer-resistant strain of mice, can kill cancer, including when transferred. a Zheng Cui research program.
my take is, he’s not an immunologist and modern methods could elucidate the specific clonal population a LOT better than this, but I like the thought process.
https://www.cell.com/heliyon/fulltext/S2405-8440(17)31693-6 they did some infusions from young blood donors into 3 patients with advanced metastatic cancer and got a bunch of tumor necrosis plus a cytokine release syndrome. all died within 3 months though.
https://link.springer.com/article/10.1186/1475-2867-11-26 healthy controls’ leukocytes are better at killing cancer in vitro than cancer patients’. this is as expected.
https://link.springer.com/article/10.1186/1471-2407-10-179 SR/CR cancer resistant mice seem to need the leukocytes to physically home to the cancer cells. again, not news; neutrophils infiltrate tumors.
Cui has been beating this drum since before immunotherapy was cool, so let’s not blame him too much, but we do very much know this bit independently
eg https://www.frontiersin.org/journals/immunology/articles/10.3389/fimmu.2019.01710/full—sometimes neutrophils promote cancer actually!
eg https://www.cell.com/cell-reports/pdfExtended/S2211-1247(22)00984-6 we can determine the “good guy” neutrophil subpopulation that infiltrates tumors and promotes an anti-tumor immune response: it’s HLA-DR+CD80+CD86+ICAM1+PD-L1-. in metastasis these guys become PD-L1+ and immunosuppressive.
so like...the secret to replicating Zheng Cui’s miracle mice...might be nivolumab?? don’t get me wrong it’s a good drug but this is anticlimactic.
a great example of the mundanity of success
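the marker combination above can be written out as a toy gating check. the marker names come from the linked paper, but representing cells as marker→bool dicts (rather than fluorescence intensities with per-marker thresholds, as in real flow/CyTOF data) is my simplification:

```python
# Toy sketch of gating for the anti-tumor neutrophil phenotype described
# above: HLA-DR+ CD80+ CD86+ ICAM1+ PD-L1-.
# Cells are dicts of marker -> bool; real data would use intensity cutoffs.

ANTI_TUMOR_GATE = {"HLA-DR": True, "CD80": True, "CD86": True,
                   "ICAM1": True, "PD-L1": False}

def matches_gate(cell, gate=ANTI_TUMOR_GATE):
    """True iff the cell has exactly the required marker pattern."""
    return all(cell.get(marker) == wanted for marker, wanted in gate.items())

good_guy = {"HLA-DR": True, "CD80": True, "CD86": True,
            "ICAM1": True, "PD-L1": False}
# the metastasis-associated switch: same cell, but PD-L1 turned on
suppressive = {**good_guy, "PD-L1": True}
```

the single-marker flip from `good_guy` to `suppressive` is why the PD-L1 axis (and hence a PD-1/PD-L1 blocker like nivolumab) is the anticlimactic punchline.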
https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0059995&type=printable the Danes do not replicate quite as much cancer resistance from SR/CR mice as Cui’s lab
https://www.pnas.org/doi/pdf/10.1073/pnas.0602382103 whatever the SR/CR mice are doing, you can transfer it to other mice and get cancer resistance. Lloyd J. Old is a coauthor!!!
https://pmc.ncbi.nlm.nih.gov/articles/PMC4544930/ independent description of anti-tumor neutrophils extracted from mice
https://www.sciencedirect.com/science/article/abs/pii/S0171298510000033 more evidence of anti-tumor granulocytes (which include neutrophils)
https://www.tandfonline.com/doi/abs/10.4161/cbt.7.9.6417 cancer patients’ granulocytes are less active
more Zheng Cui: cancer cells are negatively charged, such that positively charged nanoparticles can detect them VERY specifically. this is legit IMO.
https://link.springer.com/article/10.1007/s41048-018-0080-0 works on 22 different cancer cell lines. absolutely no affinity for healthy cells, quite a bit for all cancer cells.
https://www.thno.org/v06p1887.htm
can’t do it in vivo though
https://link.springer.com/article/10.1186/s12951-019-0491-1?fromPaywallRec=false detects four CTCs per 1 mL blood!!!!
https://www.science.org/doi/10.1126/science.abm5551#supplementary-materials sadly these guys think positive nanoparticles are too toxic to use as treatments—the entire paper is about negative nanoparticles, which do sometimes add to the tumor uptake of chemotherapies
https://www.sciencedirect.com/science/article/abs/pii/S1879625715000413 oncolytic virus BHV1 kills cancer cells in a variety of tumor types?
https://www.nature.com/articles/s41598-023-47478-x.pdf broad-spectrum metastasis suppressing compounds targeting a lncRNA. scary-big chemical structures though.
https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1349-7006.2010.01834.x broad spectrum effectiveness of a survivin inhibitor + tumor regressions in vivo.
survivin is pan-essential, i got a good feeling about this
https://aacrjournals.org/cancerimmunolres/article/2/6/510/467367/VISTA-Is-a-Novel-Broad-Spectrum-Negative VISTA is another negative checkpoint regulator like PD1 and CTLA4 (which are both major successful drug targets)
https://www.science.org/doi/abs/10.1126/scitranslmed.3007646 alkylphosphocholine is a type of lipid especially present in cancer cells, across cancer types, via lipid rafts. a synthetic analog has preferential uptake in basically all rodent & human tumors. usable for imaging and radiotherapy.
https://www.nature.com/articles/s41568-023-00554-w Sanford Simon’s personal journey against fibrolamellar hepatocellular carcinoma
they found a fusion transcript and a corresponding fusion protein—the root cause
they did the reasonable thing: screen a compound library against tumor samples.
one hit is napabucasin, usually known as a STAT3 inhibitor (but that’s not the mechanism here) but somebody owns it
another was irinotecan. and navitoclax...but navitoclax has platelet toxicity
irinotecan + a Bcl-xL PROTAC is being investigated though
or you can just. shRNA the fusion transcript. that’s a thing you can do now.
apparently Elana wanted to do that in 2013 but her dad said “pshaw RNA breaks down in the body.” now Spinraza is a thing (antisense oligonucleotide.) not to mention the mRNA world. truly these are the days of miracle and wonder.
https://www.frontiersin.org/journals/oncology/articles/10.3389/fonc.2018.00126/full redox balance is tricky, since cancers are both more prone to ROS and prone to develop coping mechanisms to regain redox balance.
https://www.nature.com/articles/cdd2017180 there’s nothing like TP53. mutated in 50-60% of cancers. the tumor suppressor gene par excellence.
https://nousresearch.com/wp-content/uploads/2024/08/Hermes-3-Technical-Report.pdf the Hermes 3 model is fine-tuned to be more responsive to prompts, such that prompt engineering suffices for “in-character” writing style (which IME does not work on most Instruct models)
neutrality (notes towards a blog post): https://roamresearch.com/#/app/srcpublic/page/Ql9YwmLas
“neutrality is impossible” is sort-of-true, actually, but not a reason to give up.
even a “neutral” college class (let’s say a standard algorithms & data structures CS class) is non-neutral relative to certain beliefs
some people object to the structure of universities and their classes to begin with;
some people may object on philosophical grounds to concepts that are unquestionably “standard” within a field like computer science.
some people may think “apolitical” education is itself unacceptable.
to consider a certain set of topics “political” and not mention them in the classroom is, implicitly, to believe that it is not urgent to resolve or act on those issues (at least in a classroom context), and therefore it implies some degree of acceptance of the default state of those issues.
our “neutral” CS class is implicitly taking a stand on certain things and in conflict with certain conceivable views. but, there’s a wide range of views, including (I think) the vast majority of the actual views of relevant parties like students and faculty, that will find nothing to object to in the class.
we need to think about neutrality in more relative terms:
what rule are you using, and what things are you claiming it will be neutral between?
what is neutrality anyway and when/why do you want it?
neutrality is a type of tactic for establishing cooperation between different entities.
one way (not the only way) to get all parties to cooperate willingly is to promise they will be treated equally.
this is most important when there is actual uncertainty about the balance of power.
eg the Dutch Republic was the first European polity to establish laws of religious tolerance, because it happened to be roughly evenly divided between multiple religions and needed to unite to win its independence.
a system is neutral towards things when it treats them the same.
there are lots of ways to treat things the same:
“none of these things belong here”
eg no religion in “public” or “secular” spaces
is the “public secular space” the street? no-hijab rules?
or is it the government? no 10 Commandments in the courthouse?
“each of these things should get equal treatment”
eg Fairness Doctrine
“we will take no sides between these things; how they succeed or fail is up to you”
e.g. “marketplace of ideas”, “colorblindness”
one can always ask, about any attempt at procedural neutrality:
what things does it promise to be neutral between?
are those the right or relevant things to be neutral on?
to what degree, and with what certainty, does this procedure produce neutrality?
is it robust to being intentionally subverted?
here and now, what kind of neutrality do we want?
thanks to the Internet, we can read and see all sorts of opinions from all over the world. a wider array of worldviews are plausible/relevant/worth-considering than ever before. it’s harder to get “on the same page” with people because they may have come from very different informational backgrounds.
even tribes are fragmented. even people very similar to one another can struggle to synch up and collaborate, except in lowest-common-denominator ways that aren’t very productive.
narrowing things down to US politics, no political tribe or ideology is anywhere close to a secure monopoly. nor are “tribes” united internally.
we have relied, until now, on a deep reserve of “normality”—apolitical, even apathetic, Just The Way Things Are. In the US that means, people go to work at their jobs and get paid for it and have fun in their free time. 90s sitcom style.
there’s still more “normality” out there than culture warriors tend to believe, but it’s fragile. As soon as somebody asks “why is this the way things are?” unexamined normality vanishes.
to the extent that the “normal” of the recent past was functional, this is a troubling development...but in general the operation of the mind is a good thing!
we just have more rapid and broader idea propagation now.
why did “open borders” and “abolish the police” and “UBI” take off recently? because these are simple ideas with intuitive appeal. some % of people will think “that makes sense, that sounds good” once they hear of them. and now, way more people are hearing those kinds of ideas.
when unexamined normality declines, conscious neutrality may become more important.
conscious neutrality for the present day needs to be aware of the wide range of what people actually believe today, and avoid the naive Panglossianism of early web 2.0.
many people believe things you think are “crazy”.
“democratization” may lead to the most popular ideas being hateful, trashy, or utterly bonkers.
on the other hand, depending on what you’re trying to get done, you may very well need to collaborate with allies, or serve populations, whose views are well outside your comfort zone.
neutrality has things to offer:
a way to build trust with people very different from yourself, without compromising your own convictions;
“I don’t agree with you on A, but you and I both value B, so I promise to do my best at B and we’ll leave A out of it altogether”
a way to reconstruct some of the best things about our “unexamined normality” and place them on a firmer foundation so they won’t disappear as soon as someone asks “why?”
a “system of the world” is the framework of your neutrality: aka it’s what you’re not neutral about.
eg:
“melting pot” multiculturalism is neutral between cultures, but does believe that they should mostly be cosmetic forms of diversity (national costumes and ethnic foods) while more important things are “universal” and shared.
democratic norms are neutral about who will win, but not that majority vote should determine the winner.
scientific norms are neutral about which disputed claims will turn out to be true, but not on what sorts of processes and properties make claims credible, and not about certain well-established beliefs
right now our system-of-the-world is weak.
a lot of it is literally decided by software affordances. what the app lets you do is what there is.
there’s a lot that’s healthy and praiseworthy about software companies and their culture, especially 10-20 years ago. but they were never prepared for that responsibility!
a stronger system-of-the-world isn’t dogmatism or naivety.
were intellectuals of the 20th, the 19th, or the 18th centuries childish because they had more explicit shared assumptions than we do? I don’t think so.
we may no longer consider some of their frameworks to be true
but having a substantive framework at all clearly isn’t incompatible with thinking independently, recognizing that people are flawed, or being open to changing your mind.
“hedgehogs” or “eternalists” are just people who consider some things definitely true.
it doesn’t mean they came to those beliefs through “blind faith” or have never questioned them.
it also doesn’t mean they can’t recognize uncertainty about things that aren’t foundational beliefs.
operating within a strongly-held, assumed-shared worldview can be functional for making collaborative progress, at least when that worldview isn’t too incompatible with reality.
mathematics was “non-rigorous”, by modern standards, until the early 20th century; and much of today’s mathematics will be considered “non-rigorous” if machine-verified proofs ever become the norm. but people were still able to do mathematics in centuries past, most of which we still consider true.
the fact that you can generate a more general framework, within which the old framework was a special case; or in which the old framework was an unprincipled assumption of the world being “nicely behaved” in some sense; does not mean that the old framework was not fruitful for learning true things.
sometimes, taking for granted an assumption that’s not literally always true (but is true mostly, more-or-less, or in the practically relevant cases) can even be more fruitful than a more radically skeptical and general view.
an *intellectual* system-of-the-world is the framework we want to use for the “republic of letters”, the sub-community of people who communicate with each other in a single conversational web and value learning and truth.
that community expanded with the printing press and again with the internet.
it is radically diverse in opinion.
it is not literally universal. not everybody likes to read and write; not everybody is curious or creative. a lot of the “most interesting people in the world” influence each other.
everybody in the old “blogosphere” was, fundamentally, the same sort of person, despite our constant arguments with each other; and not a common sort of person in the broader population; and we have turned out to be more influential than we have ever been willing to admit.
but I do think of it as a pretty big and growing tent, not confined to 300 geniuses or anything like that.
“The” conversation—the world’s symbolic information and its technological infrastructure—is something anybody can contribute to, but of course some contribute more than others.
I think the right boundary to draw is around “power users”—people who participate in that network heavily rather than occasionally.
e.g. not all academics are great innovators, but pretty much all of them are “power users” and “active contributors” to the world’s informational web.
I’m definitely a power user; I expect a lot of my readers are as well.
what do we need to not be neutral about in this context? what belongs in an intellectual system-of-the-world?
another way of asking this question: about what premises are you willing to say, not just for yourself but for the whole world and for your children’s children, “if you don’t accept this premise then I don’t care to speak to you or hear from you, forever?”
clearly that’s a high standard!
I have many values differences with, say, the author of the Epic of Gilgamesh, but I still want to read it. And I want lots of other people to be able to read it! I do not want the mind that created it to be blotted out of memory.
that’s the level of minimal shared values we’re talking about here. What do we have in common with everyone who has an interest in maintaining and extending humanity’s collective record of thought?
lack of barriers to entry is not enough.
the old Web 2.0 idea was “allow everyone to communicate with everyone else, with equal affordances.” This is a kind of “neutrality”—every user account starts out exactly the same, and anybody can make an account.
I think that’s still an underrated principle. “literally anybody can speak to anybody else who wants to listen” was an invention that created a lot of valuable affordances. we forget how painfully scarce information was when that wasn’t true!
the problem is that an information system only works when a user can find the information they seek. And in many cases, what the user is seeking is true information.
mechanisms that make high-quality information (reliable, accurate, credible, complete, etc.) preferentially discoverable are also necessary
but they shouldn’t just recapitulate potentially-biased gatekeeping.
we want evaluative systems that, at least a priori, an ancient Sumerian could look at and say “yep, sounds fair”, even if the Sumerian wouldn’t like the “truths” that come out on top in those systems.
we really can’t be parochial here. social media companies “patched” the problem of misinformation with opaque, partisan side-taking, and they suffered for it.
how “meta” do we have to get about determining what counts as reliable or valid? well, more meta than just picking a winning side in an ongoing political dispute, that’s for sure.
probably also more “meta” than handpicking certain sources as trustworthy, the way Wikipedia does.
if we want to preserve and extend knowledge, the “republic of letters” needs intentional stewardship of the world’s information, including serious attempts at neutrality.
perceived bias, of course, turns people away from information sources.
nostalgia for unexamined normality—“just be neutral, y’know, like we were when I was young”—is not a credible offer to people who have already found your nostalgic “normal” wanting.
rigorous neutrality tactics—“we have so structured this system so that it is impossible for anyone to tamper with it in a biased fashion”—are better.
this points towards protocols.
h/t Venkatesh Rao
think: zero-knowledge proofs, formal verification, prediction markets, mechanism design, crypto-flavored governance schemes, LLM-enabled argument mapping, AI mechanistic-interpretability and “showing its work”, etc
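as a minimal illustration of the *flavor* of tool this list points at (my own toy sketch, not any specific scheme named above): a hash-based commit-reveal lets, say, a moderator prove they settled on a ruling before seeing whom it favors, because anyone can later check the commitment.

```python
import hashlib
import secrets

# Toy commit-reveal sketch (illustrative only, not any of the specific
# protocols named above): publish a hash commitment now, reveal later,
# and anyone can verify the decision wasn't quietly changed in between.

def commit(message: str) -> tuple[str, str]:
    nonce = secrets.token_hex(16)  # random blinding factor, kept secret
    digest = hashlib.sha256((nonce + message).encode()).hexdigest()
    return digest, nonce  # publish the digest; keep the nonce for the reveal

def verify(digest: str, nonce: str, message: str) -> bool:
    return hashlib.sha256((nonce + message).encode()).hexdigest() == digest

digest, nonce = commit("ruling: post A stays up")
# later, reveal (nonce, message) and anyone can check:
assert verify(digest, nonce, "ruling: post A stays up")
assert not verify(digest, nonce, "ruling: post A is removed")
```

the point is the shape of the guarantee: fairness that is checkable by anyone, rather than vouched for by a trusted party.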
getting fancy with the technology here often seems premature when the “public” doesn’t even want neutrality; but I don’t think it actually is.
people don’t know they want the things that don’t yet exist.
the people interested in developing “provably”, “rigorously”, “demonstrably” impartial systems are exactly the people you want to attract first, because they care the most.
getting it right matters.
a poorly executed attempt either fizzles instantly; or it catches on but its underlying flaws start to make it actively harmful once it’s widely culturally influential.
OTOH, premature disputes on technology and methods are undesirable.
remember there aren’t very many of you/us. that is:
pretty much everybody who wants to build rigorous neutrality, no matter why they want it or how they want to implement it, is a potential ally here.
the simple fact of wanting to build a “better” world that doesn’t yet exist is a commonality, not to be taken for granted. most people don’t do this at all.
the “softer” side, mutual support and collegiality, is especially important to people whose dreams are very far from fruition. people in this situation are unusually prone to both burnout and schism. be warm and encouraging; it helps keep dreams alive.
also, the whole “neutrality” thing is a sham if we can’t even engage with collaborators with different views and cultural styles.
also, “there aren’t very many of us” in the sense that none of these envisioned new products/tools/institutions are really off the ground yet, and the default outcome is that none of them get there.
you are playing in a sandbox. the goal is to eventually get out of the sandbox.
you will need to accumulate talent, ideas, resources, and vibe-momentum. right now these are scarce, or scattered; they need to be assembled.
be realistic about influence.
count how many people are at the conference or whatever. how many readers. how many users. how many dollars. in absolute terms it probably isn’t much. don’t get pretentious about a “movement”, “community”, or “industry” before it’s shown appreciable results.
the “adjacent possible” people to get involved aren’t the general public, they’re the closest people in your social/communication graph who aren’t yet participating. why aren’t they part of the thing? (or why don’t you feel comfortable going to them?) what would you need to change to satisfy the people you actually know?
this is a better framing than speculating about mass appeal.
Things that many people consider controversial: evolution, sex education, history. But even for mathematical lessons, you will often find a crackpot who considers a given topic controversial. (-1)×(-1) = 1? 0.999… = 1?
In general, unschooling.
In my opinion, the important functionality of schools is: (1) separating reliable sources of knowledge from bullshit, (2) designing a learning path from “I know nothing” to “I am an expert” where each step only requires the knowledge of previous steps, (3) classmates and teachers to discuss the topic with.
Without these things, learning is difficult. If an autodidact stumbles on some pseudoscience in a library, even if they later figure out that it was bullshit, it is a huge waste of time. Picking up random books on a topic and finding out that I don’t understand the things they expect me to already know is disappointing. Finding people interested in the same topic can be difficult.
But everything else about education is incidental. No need to walk into the same building. No need to only have classmates of exactly the same age. The learning path doesn’t have to be linear; it could be a directed graph. Generally, no need to learn a specific topic at a specific age, although it makes sense to learn the topics that are prerequisites to a lot of knowledge as soon as possible. Grading is incidental; you need some feedback, but IMHO it would be better to split the knowledge into many small pieces, and grade each piece as “you get it” or “you don’t”.
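A minimal sketch of what a non-linear learning path could look like, with made-up topics (my illustration, not a proposed curriculum): prerequisites form a directed graph, and any topological ordering of it is a valid path where each step only requires previous steps.

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical prerequisite graph; topics and edges are invented
# purely for illustration.
prerequisites = {
    "arithmetic": set(),
    "algebra": {"arithmetic"},
    "geometry": {"arithmetic"},
    "probability": {"algebra"},
    "calculus": {"algebra", "geometry"},
}

# Any topological order is a valid learning path: each topic appears
# only after everything it depends on.
path = list(TopologicalSorter(prerequisites).static_order())
```

Many different orders satisfy the constraints, which is exactly the point: the graph fixes what must come before what, not one canonical sequence.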
...and the conclusion of my thesis is that a good educational system would focus on the essentials and be liberal about everything else. However, there are people who object to the very things I consider essential. An educational system that would seem incredibly free to me would still seem oppressive to them.
That means you can have a system neutral towards selected entities (the ones you want in the coalition), but not others. For example, you can have religious tolerance towards an explicit list of churches.
This can lead to a meta-game where some members of the coalition try to kick someone out, because they are no longer necessary. And some members strategically keep someone in, not necessarily because they love them, but because “if they are kicked out today, tomorrow it could be me; better avoid this slippery slope”.
Examples: various cults in the USA that are obviously destructive but enjoy a lot of legal protection. Leftists establishing an exception for “Nazis”, and then expanding the definition to make it apply to anyone they don’t like. Similarly, the right calling everything they don’t like “communism”. And everyone on the internet calling everything a “religion”.
Or the opposite of that: “the world is biased against X, therefore we move towards true neutrality by supporting X”.
So, situations like: the organization is nominally politically neutral, but a human at an important position has political preferences… so far that is normal and maybe unavoidable. But what if there are multiple humans like that, all with the same political preference? If they start acting in a biased way, is it possible for other members to point it out… without getting accused in turn of “bringing politics” into the organization?
They can easily create a subreddit r/anti-some-specific-way-things-are and now the opposition to the idea is forever a thing.
Basically, we need a “FAQ for normality”. The old situation was that people who were interested in a topic knew why things are a certain way, and others didn’t care. If you joined the group of people who are interested, sooner or later someone explained it to you in person.
But today, someone can make a popular YouTube video containing some false explanation, and overnight you have tons of people who are suddenly interested in the topic and believe a falsehood… and the people who know how things are just don’t have the capacity to explain that to someone who lacks the fundamentals, believes a lot of nonsense, has strong opinions, and is typically very hostile to someone trying to correct them. So they just give up. But now we have the falsehood established as an “alternative truth”, and the old process of teaching the newcomers no longer works.
The solution for “I don’t have a capacity to communicate to so many ignorant and often hostile people” is to make an article or a YouTube video with an explanation, and just keep posting the link. Some people will pay attention, some people won’t, but it no longer takes a lot of your time, and it protects you from the emotional impact.
There are things for which we don’t have a good article to link, or the article is not known to many. We could fix that. In theory, school was supposed to be this kind of FAQ, but that doesn’t work in a dynamic society where new things happen after you are out of school.
Yeah, I often feel that having some kind of functionality would improve things, but the functionality is simply not there.
To some degree this is caused by companies having a monopoly on the ecosystem they create. For example, if I need some functionality for e-mail, I can make an open-source e-mail client that has it. (I think historically spam filters started like this.) If I need some functionality for Facebook… there is nothing I can do about it, other than leave Facebook, and coordinating that is a problem of its own.
Sometimes this is on purpose. Facebook doesn’t want me to be able to block the ads and spam, because they profit from it.
Yeah, if we share a platform, we may start examining some of its assumptions, and maybe at some moment we will collectively update. But if everyone assumes something else, it’s the Eternal September of civilization.
If we can’t agree on what is addition, we can never proceed to discuss multiplication. And we will never build math.
Sometimes this is reflected by the medium. For example, many people post comments on blogs, but only a small fraction of them write blogs. By writing a blog you join the “power users”, and the beauty of it is that it is free for everyone, and yet most people keep themselves out voluntarily.
(A problem coming soon: many fake “power users” powered by LLMs.)
There is a difference between reading for curiosity and reading to get reliable information. I may be curious about e.g. Aristotle’s opinion on atoms, but I am not going to use it to study chemistry.
In some way, I treat some people’s opinions as information about the world, and other people’s opinions as information about them. Both are interesting, but in a different way. It is interesting to know my neighbor’s opinion on astrology, but I am not using this information to update on astrology; I only use it to update on my neighbor.
So I guess I have two different lines: whether I care about someone as a person, and whether I trust someone as a source of knowledge. I listen to both, but I process the information differently.
Thinking about the user experience, I think it would be best if the protocol already came with three default implementations: as a website, as a desktop application, and as a smartphone app.
A website doesn’t require me to install anything; I just create an account and start using it. The downside is that the website has an owner, who can kick me out of the website. Also, I cannot verify the code; a malicious owner could probably take my password (unless we figure out some way to avoid this that won’t be too inconvenient). Ideally there would be multiple websites talking to each other, in a way that is as transparent for the user as possible.
A smartphone app, because that’s what most people use most of the day, especially when they are outside.
A desktop app, because that provides most options for the (technical) power user. For example, it would be nice to keep an offline archive of everything I want, delete anything I no longer want, export and import data.
links, 10/14/2024
https://milton.host.dartmouth.edu/reading_room/pl/book_1/text.shtml [[John Milton]]’s Paradise Lost, annotated online [[poetry]]
https://darioamodei.com/machines-of-loving-grace [[AI]] [[biotech]] [[Dario Amodei]] spends about half of this document talking about AI for bio, and I think it’s the most credible “bull case” yet written for AI being radically transformative in the biomedical sphere.
one caveat is that I think if we’re imagining a future with brain mapping, regeneration of macroscopic brain tissue loss, and understanding what brains are doing well enough to know why neurological abnormalities at the cell level produce the psychiatric or cognitive symptoms they do...then we probably can do brain uploading! it’s really weird to single out this one piece as pie-in-the-sky science fiction when you’re already imagining a lot of similarly ambitious things as achievable.
https://venture.angellist.com/eli-dourado/syndicate [[tech industry]] when [[Eli Dourado]] picks startups, they’re at least not boring! i haven’t vetted the technical viability of any of these, but he claims to do a lot of that sort of numbers-in-spreadsheets work.
https://forum.effectivealtruism.org/topics/shapley-values [[EA]] [[economics]] how do you assign credit (in a principled fashion) to an outcome that multiple people contributed to? Shapley values! It seems extremely hard to calculate in practice, and subject to contentious judgment calls about the assumptions you make, but maybe it’s an improvement over raw handwaving.
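for concreteness, here’s a brute-force sketch of the definition (my toy example; real applications are exactly where this gets hard): a player’s Shapley value is their marginal contribution, averaged over every order in which the coalition could have assembled.

```python
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution over all join orders.

    `value` maps a frozenset of players to the worth of that coalition.
    Brute force: n! orderings, so only feasible for small n.
    """
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: t / len(orders) for p, t in totals.items()}

# Made-up game: a project worth 100 happens only if both the researcher
# and the funder join; by symmetry they split the credit 50/50.
worth = lambda s: 100 if {"researcher", "funder"} <= s else 0
print(shapley_values(["researcher", "funder"], worth))
# {'researcher': 50.0, 'funder': 50.0}
```

the factorial blow-up, plus the judgment calls hidden inside the `value` function, are where the “hard to calculate in practice” complaint comes from.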
https://gwern.net/maze [[Gwern Branwen]] digs up the “Mr. Young” studying maze-running techniques in [[Richard Feynman]]’s “Cargo Cult Science” speech. His name wasn’t Young but Quin Fischer Curtis, and he was part of a psychology research program at UMich that published little and had little influence on the outside world, and so was “rebooted” and forgotten. Impressive detective work, though not a story with a very satisfying “moral”.
https://en.m.wikipedia.org/wiki/Cary_Elwes [[celebrities]] [[Cary Elwes]] had an ancestor who was [[Charles Dickens]]’ inspiration for Ebenezer Scrooge!
https://feministkilljoys.com/2015/06/25/against-students/ [[politics]] an old essay by [[Sara Ahmed]] in defense of trigger warnings in the classroom and in general against the accusations that “students these days” are oversensitive and illiberal.
She’s doing an interesting thing here that I haven’t wrapped my head around. She’s not making the positive case “students today are NOT oversensitive or illiberal” or “trigger warnings are beneficial,” even though she seems to believe both those things. she’s more calling into question “why has this complaint become a common talking point? what unstated assumptions does it perpetuate?” I am not sure whether this is a valid approach that’s alternate to the forms of argument I’m more used to, or a sign of weakness (a thing she’s doing only because she cannot make the positive case for the opposite of what her opponents claim.)
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10080017/ [[cancer]][[medicine]] [[biology]] cancer preventatives are an emerging field
NSAIDS and omega-3 fatty acids prevent 95% of tumors in a tumor-prone mouse strain?!
also we’re targeting [[STAT3]] now?! that’s a thing we’re doing.
([[STAT3]] is a major oncogene but it’s a transcription factor, it lives in the cytoplasm and the nucleus, this is not easy to target with small molecules like a cell surface protein.)
https://en.m.wikipedia.org/wiki/CLARITY [[biotech]] make a tissue sample transparent so you can make 3D microscopic imaging, with contrast from immunostaining or DNA/RNA labels
https://distill.pub/2020/circuits/frequency-edges/ [[AI]] [[neuroscience]] a type of neuron in vision neural nets, the “high-low frequency detector”, has recently also been found to be a thing in literal mouse brain neurons (h/t [[Dario Amodei]]) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10055119/
https://mosaicmagazine.com/essay/israel-zionism/2024/10/the-failed-concepts-that-brought-israel-to-october-7/ [[politics]][[Israel]][[war]] an informative and sober view on “what went wrong” leading up to Oct 7
tl;dr: Hamas consistently wants to destroy Israel and commit violence against Israelis, they say so repeatedly, and there was never going to be a long-term possibility of living peacefully side-by-side with them; Netanyahu is a tough talker but kind of a procrastinator who’s kicked the can down the road on national security issues for his entire career; catering to settlers is not in the best interests of Israel as a whole (they provoke violence) but they are an unduly powerful voting bloc; Palestinian misery is real but has been institutionalized by the structure of the Gazan state and the UN which prevents any investment into a real local economy; the “peace process” is doomed because Israel keeps offering peace and the Palestinians say no to any peace that isn’t the abolition of the State of Israel.
it’s pretty common for reasonable casual observers (eg in America) to see Israel/Palestine as a tragic conflict in which probably both parties are somewhat in the wrong, because that’s a reasonable prior on all conflicts. The more you dig into the details, though, the more you realize that “let’s live together in peace and make concessions to Palestinians as necessary” has been the mainstream Israeli position since before 1948. It’s not a symmetric situation.
[[von Economo neurons]] are spooky [[neuroscience]] https://en.wikipedia.org/wiki/Von_Economo_neuron
only found in great apes, cetaceans, and humans
concentrated in the [[anterior cingulate cortex]] and [[insular cortex]] which are closely related to the “sense of self” (i.e. interoception, emotional salience, and the perception that your e.g. hand is “yours” and it was “you” who moved it)
the first to go in [[frontotemporal dementia]]
https://www.nature.com/articles/s41467-020-14952-3 we don’t know where they project to! they are so big that we haven’t tracked them fully!
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3953677/
https://www.wired.com/story/lee-holloway-devastating-decline-brilliant-young-coder/ the founder of Cloudflare had [[frontotemporal dementia]] [[neurology]]
[[frontotemporal dementia]] is maybe caused by misfolded proteins being passed around neuron-to-neuron, like prion disease! [[neurology]]
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6838634/
https://www.nature.com/articles/s41467-018-06548-9.pdf inject the bad protein into a mouse and it really does spread!
https://researchfeatures.com/cell-cell-transmission-proteins-core-neurodegenerative-disease/ something similar might be happening in [[Alzheimer’s]] as well
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3943211/ the spread of [[ALS]] through the brain is consistent with cell-to-cell transmission of misfolded proteins
It is good to have one more perspective, and perhaps also good to develop a habit to go meta. So that when someone tells you “X”, in addition to asking yourself “is X actually true?” you also consider questions like “why is this person telling me X?”, “what could they gain in this situation by making me think more about X?”, “are they perhaps trying to distract me from some other Y?”.
Because there are such things as filtered evidence, availability bias, limited cognition; and they all can be weaponized. While you are trying really hard to solve the puzzle the person gave you, they may be using your inattention to pick your pockets.
In extreme cases, it can even be a good thing to dismiss the original question entirely. Like, if you are trying to leave an abusive religious cult, and the leader gives you a list of “ten thousand extremely serious theological questions you need to think about deeply before you make the potentially horrible mistake of damning your soul by leaving this holy group”, you should not actually waste your time thinking about them, but keep planning your escape.
Now the opposite problem is that some people get so addicted to the meta that they are no longer considering the object level. “You say I’m wrong about something? Well, that’s exactly what the privileged X people love to do, don’t they?” (Yeah, they probably do. But there is still a chance that you are actually wrong about something.)
tl;dr—mentioning the meta, great; but completely avoiding the object level, weakness
So, how much meta is the right amount of meta? Dunno, that’s a meta-meta question. At some point you need to follow your intuition and hope that your priors aren’t horribly wrong.
The situation is not symmetric, I agree. But also, it is too easy to underestimate the impact of the settlers. I mean, if you include them in the picture, then the overall Israeli position becomes more like: “Let’s live together in peace, and please ignore these few guys who sometimes come to shoot your family and take your homes. They are an extremist minority that we don’t approve of, but for complicated political reasons we can’t do anything about them. Oh, and if you try to defend yourself against them, chances are our army might come to defend them. And that’s also something we deeply regret.”
It is much better than the other side, but in my opinion still fundamentally incompatible with peace.
kinda meta, but I find myself wondering if we should handle Roam [[ tag ]] syntax in some nicer way. Probably not but it seems nice if it managed to have no downsides.
It wouldn’t collide with normal Markdown syntax use. (I can’t think of any natural examples, aside from bracket use inside links, like `[[editorial comment]](URL)`, which could be special-cased by looking for the parentheses required for the URL part of a Markdown link.) But it would be ambiguous where the wiki links point to (Sarah’s Roam wiki? English Wikipedia?), and if it pointed to somewhere other than LW2 wiki entries, then it would also be ambiguous with that too (because the syntax is copied from Mediawiki and so the same as the old LW wiki’s links). And it seems like an overloading special case you would regret in the long run, compared to something which rewrote them into regular links. It adds a lot of complexity for a handful of uses.
I thought about manually deleting them all but I don’t feel like it.
I don’t know how familiar you are with regular expressions, but you could do this with a two-pass regular expression search-and-replace. (I used Emacs regex format; your preferred editor might use a different format. Notably, in Emacs `\[` is a literal bracket but `(` is a literal parenthesis, for some reason.)
replace `^\(https://.*? \)\(\[\[.*?\]\] \)*` with `\1`
replace `\[\[\(.*?\)\]\]` with `\1`
This first deletes any tags that occur right after a hyperlink at the beginning of a line, then removes the brackets from any remaining tags.
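If you’d rather do it outside an editor, here is the same two-pass idea as a Python sketch (my translation; the patterns assume the “URL first, then tags” line layout described above):

```python
import re

def strip_roam_tags(text: str) -> str:
    # Pass 1: delete [[tag]] groups sitting right after a URL at the
    # start of a line.
    text = re.sub(r"^(https://.*? )(\[\[.*?\]\] )*", r"\1", text, flags=re.M)
    # Pass 2: unwrap any remaining [[tags]], keeping the inner text.
    return re.sub(r"\[\[(.*?)\]\]", r"\1", text)

line = "https://example.com/post [[AI]] [[biotech]] some notes on [[Dario Amodei]]"
strip_roam_tags(line)
# 'https://example.com/post some notes on Dario Amodei'
```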
RE Shapley values, I was persuaded by this comment that they’re less useful than counterfactual value in at least some practical situations.
links 11/08/2024: https://roamresearch.com/#/app/srcpublic/page/11-08-2024
https://agingbiotech.info/about/ a database of aging biotech companies compiled by Karl Pfleger
https://longevitylist.com/longevity-industry-database/ a database of aging biotech companies compiled by Nathan Cheng, includes somewhat different picks
GLP-1 receptor agonist drugs reduce all-cause mortality—so what diseases or causes of death do they prevent?
https://www.nature.com/articles/s41467-024-50199-y kidney disease (in type-2 diabetes patients with kidney disease)
https://www.ajmc.com/view/glp-1s-reduce-cardiovascular-risk-equally-in-patients-with-overweight-obesity-regardless-of-diabetes cardiovascular disease (in overweight or obese patients)
https://journals.sagepub.com/doi/pdf/10.1177/17562864241281903
https://www.science.org/doi/10.1126/science.adn4128 (sadly I couldn’t find the full article)
https://www.ingentaconnect.com/content/ben/cdr/2018/00000014/00000003/art00008 cardiovascular disease (in diabetics)
https://wonder.cdc.gov/controller/datarequest/D176;jsessionid=C53D7110417D14C262ECD70F0091 what are the leading causes of death in 2023?
heart disease, cancer, accidents, stroke, COPD, Alzheimer’s, diabetes, kidney disease, liver disease, COVID-19, suicide, influenza & pneumonia, hypertension, septicemia, Parkinson’s
surprised suicide was so high and that COVID-19 was still so deadly (I assume mostly in the elderly)
https://www.fiercebiotech.com/biotech/bioage-brings-almost-200m-ipo-obesity-biotech-joins-nasdaq BioAge IPO
I forgot that Sam Altman invested in Retro Bio
https://www.technologyreview.com/2023/03/08/1069523/sam-altman-investment-180-million-retro-biosciences-longevity-death/
the man has good taste. like, it’s not blindingly original to appreciate Retro, but it is eminently reasonable.
there’s a lot of moderate-Democrat post-election resignation to the effect of “this is what the country wanted; the median voter is in fact pretty OK with Trump” and “the progressive apparatus was more interested in staying in its comfort zone than winning elections”
https://substack.com/home/post/p-151278372 Jesse Singal
he was saying similar things all along: https://jessesingal.substack.com/p/democrats-should-acknowledge-reality
I’m also seeing a fair number of women going “ok, sure, there are things to criticize about feminist dogma, but actually I have experienced traditionalist religious mores and they were Not Good”, which I think is a needed corrective these days
https://substack.com/home/post/p-141175575 here’s Audrey Horne
https://backofmind.substack.com/p/incompetence-is-a-form-of-bias Dan Davies says incompetence is a form of bias—the people who have the social skills and clout to get their problems fixed, will.
Dan Davies on politics and populism...i’m not sure where he’s going here but this is intriguing.
https://substack.com/home/post/p-151264334
https://esmeralda.org/ Esmeralda, Devon Zeugel’s Chautauqua-inspired village in California
links 11/6/2024: https://roamresearch.com/#/app/srcpublic/page/11-06-2024
https://angrystaffofficer.com/2018/09/19/if-the-hoth-crash-was-an-air-force-investigation/
this taught me the phrase “mishap pilot”
https://pmc.ncbi.nlm.nih.gov/articles/PMC5656536/ this is measles virus used against relapsed multiple myeloma; one complete response out of 32 patients.
https://www.nature.com/articles/s41375-020-0828-7.pdf the one patient with the CR had strong T-cell responses to measles virus proteins. suggests that when this works it’s via immune response.
https://ajronline.org/doi/pdf/10.2214/AJR.09.3672 it works on mouse pancreatic cancer
https://ascopubs.org/doi/abs/10.1200/JCO.2022.40.6_suppl.509 seems to be able to treat bladder cancer?
https://pmc.ncbi.nlm.nih.gov/articles/PMC3018921/ blocks medulloblastoma growth in mice
https://en.wikipedia.org/wiki/CD46 the receptor for measles virus is also frequently expressed by cancer cells
https://www.frontiersin.org/journals/oncology/articles/10.3389/fonc.2023.1095219/full targeting CDKs in sarcomas—there are some clinical trials happening
https://ascopubs.org/doi/abs/10.1200/PO.24.00219 palbociclib: one partial response out of 42 sarcoma patients
https://aacrjournals.org/clincancerres/article/29/17/3484/728559 suggestive in-vitro/animal evidence
targeting FGFRs in advanced solid tumors with FGFR mutations/overexpression: https://dial.uclouvain.be/pr/boreal/object/boreal%3A285422/datastream/PDF_01/view
3% complete response, 25% partial response with erdafitinib
https://pmc.ncbi.nlm.nih.gov/articles/PMC8231807/ FGFR inhibitors are typically toxic
MPNSTs cluster into two distinct types of genomic alteration with different drug vulnerabilities https://www.nature.com/articles/s41467-023-38432-6.pdf
targeting MDM2 in advanced solid tumors: there’s a trial. https://clinicaltrials.gov/study/NCT03611868
https://ascopubs.org/doi/10.1200/JCO.2022.40.16_suppl.9517 2 complete responses in melanoma, 1 PR each in liposarcoma, urothelial, and NSCLC, but none in MPNST.
http://www.annclinlabsci.org/content/46/6/627.full it’s being explored as a target in cancer
https://www.sciencedirect.com/science/article/abs/pii/S0959804920313228 21% partial response in soft tissue sarcoma to a XPO1 inhibitor + chemo
review article on XPO1 inhibition https://www.nature.com/articles/s41571-020-00442-4
https://www.centauri-dreams.org/2024/11/05/vegas-puzzling-disk/ the star Vega looks like it has a disc but no planets
https://www.nature.com/articles/s41598-024-58899-7 CD74 in cancer is an indicator for M1 macrophage infiltration, across cancer types
https://bibliome.ai/ is a resource for looking up specific genome variants and their references in the literature and open-access databases.
when i click through to references they’re often inaccurate (they are claimed to reference a variant that they do not, in fact, contain) but tbh this is also true of Google Search and Google Scholar when it comes to rare variants.
links 11/05/2024: https://roamresearch.com/#/app/srcpublic/page/11-05-2024
https://en.wikipedia.org/wiki/IMM-101 a heat-killed bacterial preparation that might actually work (with chemo) for metastatic pancreatic cancer?
https://www.annalsofoncology.org/article/S0923-7534(19)64297-3/fulltext not bad in metastatic melanoma either
https://ascopubs.org/doi/10.1200/JCO.2022.40.16_suppl.9554 melanoma: 18% CR in treatment-naive patients when combined with nivolumab. (meh, nivolumab alone is comparable)
https://pmc.ncbi.nlm.nih.gov/articles/PMC4731256/ this is one patient, but it’s metastatic pancreatic cancer, this is super hard mode
made by these guys. https://www.immodulon.com/about-us/ they don’t look crazypants
https://en.wikipedia.org/wiki/Measles_virus_encoding_the_human_thyroidal_sodium_iodide_symporter measles virus can be made oncolytic!
https://www.lymphomainfo.net/lifestyle/treatment/engineered-measles-virus-puts-myeloma-patient-into-remission
https://www.nature.com/articles/s41591-021-01544-x peptide vaccines have a terrible track record overall but this one (on metastatic melanoma, combined with nivolumab) looks good
https://www.nobelprize.org/prizes/medicine/2018/summary/ James Allison and Tasuku Honjo got the Nobel Prize for discovering the immune checkpoints CTLA4 and PD1 respectively
https://www.nature.com/articles/s41562-017-0055 Carl Hart argues against viewing addiction as a “brain disease”:
we have not found a physiological difference between the brains of addicts and non-addicts
people are more likely to get addicted to drugs when their lives are terrible; only focusing on biomedical angles on tackling drug addiction means that it’s not considered “real” drug-addiction work to try to improve underlying social problems like poverty or injustice
in particular drug-war policies are often part of the problem, and biomedical addiction research can’t critique laws
https://www.science.org/doi/abs/10.1126/science.abb5920 this one didn’t make the cutoff for my success-story post (only 1/10 patients had a CR) but it’s astonishing that it does anything at all; a fecal matter transplant resulted in a complete response (and two partial responses) upon reintroduction of PD1 immunotherapy, in metastatic melanoma patients who had failed it before.
i am so disillusioned with FMTs that i might still chalk this up to a fluke, but who knows
https://en.wikipedia.org/wiki/Imiquimod is a weird, weird drug, used for genital warts and cutaneous cancers.
it’s a TLR7 activator.
(more innate immune stuff!!)
sarah do you just like the innate immune system because it’s comprehensible? yes. yes i do. and you should too.
https://jamanetwork.com/journals/jamaoncology/fullarticle/2598488 works on cutaneous breast cancer metastases.
note that it is TOPICAL.
really high complete response rates in metastatic cancers almost only occur when you have a topical/intratumoral/etc treatment physically localized to the tumor, frequently using an innate-immune mechanism.
that’s also the literal majority of all historical cases of spontaneous tumor regressions—they tend to happen when there’s an infection at the tumor site, causing a powerful (innate! fever, inflammation, sepsis!) immune reaction.
the innate immune system is potent, and it is nasty, which is why you want to confine it.
immune checkpoint inhibitors are real good for metastatic cancer:
https://www.tandfonline.com/doi/full/10.1080/2162402X.2016.1214788#abstract combined with radiotherapy, on melanoma brain metastases
https://ascopubs.org/doi/abs/10.1200/JCO.2018.36.15_suppl.9537 on Merkel cell carcinoma, a skin cancer
https://www.nature.com/articles/npjgenmed201637 on liver and lung metastases of basal cell carcinoma
https://www.frontiersin.org/journals/oncology/articles/10.3389/fonc.2023.1078915/full in colon cancer
https://www.frontiersin.org/journals/oncology/articles/10.3389/fonc.2020.615298/full in penile cancer
https://europepmc.org/article/med/36916116 in kidney cancer
https://pmc.ncbi.nlm.nih.gov/articles/PMC11099454/ in pancreatic ductal adenocarcinoma (whoa)
cell immunotherapies can also be amazing for metastatic cancer:
https://ar.iiarjournals.org/content/30/2/575.short this is a complete remission in metastatic renal cell carcinoma with adoptive gamma-delta T-cells (and IL-2; the innate immune system strikes again)
https://ascopubs.org/doi/full/10.1200/JCO.2014.58.9093 in cervical cancer, with tumor-infiltrating T cells
https://www.nejm.org/doi/full/10.1056/NEJMoa2028485 here’s an antibody-drug conjugate for metastatic breast cancer. not enough complete responses to make it into my post, but look at that sweet Kaplan-Meier curve.
https://link.springer.com/article/10.1245/s10434-018-07143-4 isolated limb perfusion for melanoma: get higher doses of chemo into the tumor than the patient could survive otherwise, by cutting off circulation to the limb. when this sort of thing is possible, it really, really works.
https://link.springer.com/article/10.1245/s10434-011-2030-7 and more.
https://link.springer.com/article/10.1186/s40425-018-0337-7 this is an oncolytic virus (intratumoral!) for metastatic melanoma.
https://link.springer.com/article/10.1186/s40425-018-0337-7 more oncolytic viruses that work! (also metastatic melanoma, also intralesional).
https://link.springer.com/article/10.1007/s10549-022-06678-1 I hate on growth factor-targeted therapies a lot, but there are exceptions. Herceptin is a real drug. Look at this. 69 HER2+ patients presenting with metastatic breast cancer and treated with trastuzumab as part of their initial treatment, 54% get a complete response. 41% survived 5+ years after diagnosis. This is really, really solid.
electrochemotherapy is injecting tumors with cytotoxic drugs and electroporating the tumor so the drugs get in better.
It’s only possible when you can physically access the tumor, i.e. when it’s on the skin or when you’re operating anyway (but can’t surgically remove the tumor, because if you could, you would just do that).
it also, really, really works. https://onlinelibrary.wiley.com/doi/full/10.1002/jso.23625
https://cccblog.org/2018/06/13/the-surprising-security-benefits-of-end-to-end-formal-proofs/
if you can prove your computer program does what it’s supposed to—for almost any reasonable interpretation of “what it’s supposed to”—you will, as a side effect, also prove it doesn’t have common security flaws like buffer overflows.
people I looked up while reading Neal Stephenson’s Baroque Cycle:
https://en.wikipedia.org/wiki/Caroline_of_Ansbach
https://en.wikipedia.org/wiki/Sophia_Charlotte_of_Hanover
https://en.wikipedia.org/wiki/Princess_Eleonore_Erdmuthe_of_Saxe-Eisenach
https://en.wikipedia.org/wiki/Sophia_of_Hanover
https://roamresearch.com/#/app/srcpublic/page/10-11-2024
https://www.mindthefuture.info/p/why-im-not-a-bayesian [[Richard Ngo]] [[philosophy]] I think I agree with this, mostly.
I wouldn’t say “not a Bayesian” because there’s nothing wrong with Bayes’ Rule and I don’t like the tribal connotations, but lbr, we don’t literally use Bayes’ rule very often and when we do it often reveals just how much our conclusions depend on problem framing and prior assumptions. A lot of complexity/ambiguity necessarily “lives” in the part of the problem that Bayes’ rule doesn’t touch. To be fair, I think “just turn the crank on Bayes’ rule and it’ll solve all problems” is a bit of a strawman (nobody literally believes that, do they?) but yeah, sure, happy to admit that most of the “hard part” of figuring things out is not the part where you can mechanically apply probability.
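a minimal illustration of the prior-sensitivity point, with invented numbers: the same positive test result, cranked through Bayes’ rule under two different priors, gives wildly different posteriors. The mechanical step is trivial; everything interesting was in choosing the prior.

```python
def posterior(prior, sensitivity=0.9, false_pos=0.05):
    """P(condition | positive test), via Bayes' rule."""
    num = sensitivity * prior
    return num / (num + false_pos * (1 - prior))

for prior in (0.01, 0.10):
    print(f"prior {prior:.0%} -> posterior {posterior(prior):.1%}")
# prior 1% -> posterior ~15.4%; prior 10% -> posterior ~66.7%
```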
https://www.lesswrong.com/posts/YZvyQn2dAw4tL2xQY/rationalists-are-missing-a-core-piece-for-agent-like [[tailcalled]] this one is actually interesting and novel; i’m not sure what to make of it. maybe literal physics, with like “forces”, matters and needs to be treated differently than just a particular pattern of information that you could rederive statistically from sensory data? I kind of hate it but unlike tailcalled I don’t know much about physics-based computational models...[[philosophy]]
https://alignbio.org/ [[biology]] [[automation]] datasets generated by the Emerald Cloud Lab! [[Erika DeBenedectis]] project. Seems cool!
https://www.sciencedirect.com/science/article/abs/pii/S0306453015009014?via%3Dihub [[psychology]] the forced swim test is a bad measure of depression.
when a mouse trapped in water stops struggling, that is not “despair” or “learned helplessness.” these are anthropomorphisms. the mouse is in fact helpless, by design; struggling cannot save it; immobility is adaptive.
in fact, mice become immobile faster when they have more experience with the test. they learn that struggling is not useful and they retain that knowledge.
also, a mouse in an acute stress situation is not at all like a human’s clinical depression, which develops gradually and persists chronically.
https://www.sciencedirect.com/science/article/abs/pii/S1359644621003615?via%3Dihub the forced swim test also doesn’t predict clinical efficacy of antidepressants well. (admittedly this study was funded by PETA, which thinks the FST is cruel to mice)
https://en.wikipedia.org/wiki/Copy_Exactly! [[semiconductors]] the Wiki doesn’t mention that Copy Exactly was famously a failure. even when you try to document procedures perfectly and replicate them on the other side of the world, at unprecedented precision, it is really really hard to get the same results.
https://neuroscience.stanford.edu/research/funded-research/optimization-african-killifish-platform-rapid-drug-screening-aggregate [[biology]] you know what’s cool? building experimentation platforms for novel model organisms. Killifish are the shortest-lived vertebrate—which is great if you want to study aging. they live in weird oxygen-poor freshwater zones that are hard to replicate in the lab. figuring out how to raise them in captivity and standardize experiments on them is the kind of unsung, underfunded accomplishment we need to celebrate and expand WAY more.
https://www.nature.com/articles/513481a [[biology]] [[drug discovery]] ever heard of curcumin doing something for your health? resveratrol? EGCG? those are all natural compounds that light up a drug screen like a Christmas tree because they react with EVERYTHING. they are not going to work on your disease in real life.
they’re called PAINs, pan-assay interference compounds, and if you’re not a chemist (or don’t consult one) your drug screen is probably full of ’em. false positives on academic drug screens (Big Pharma usually knows better) are a scourge. https://en.wikipedia.org/wiki/Pan-assay_interference_compounds
sadly, while they make automated PAINs alerts, they don’t work for shit. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5411023/ sorry, shut-ins and cheapskates; you might have to talk to an actual chemist.
https://en.wikipedia.org/wiki/Fetal_bovine_serum [[biotech]] this cell culture medium is just...cow juice. it is not consistent batch to batch. this is a big problem.
https://www.nature.com/articles/s42255-021-00372-0 [[biology]] mice housed at “room temperature” are too cold for their health; they are more disease-prone, which calls into question a lot of experimental results.
https://calteches.library.caltech.edu/51/2/CargoCult.htm [[science]] the famous [[Richard Feynman]] “Cargo cult science” essay is about flawed experimental methods!
if your rat can smell the location of the cheese in the maze all along, then your maze isn’t testing learning.
errybody want to test rats in mazes, ain’t nobody want to test this janky-ass maze!
https://fastgrants.org/ [[metascience]] [[COVID-19]] this was cool, we should bring it back for other stuff
https://erikaaldendeb.substack.com/cp/147525831 [[biotech]] engineering biomanufacturing microbes for surviving on Mars?!
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8278038/ [[prediction markets]] DARPA tried to use prediction markets to predict the success of projects. it didn’t work! they couldn’t get enough participants.
https://www.citationfuture.com/ [[prediction markets]] these guys do prediction markets on science
https://jamesclaims.substack.com/p/how-should-we-fund-scientific-error [[metascience]] [[James Heathers]] has a proposal for a science error detection (fraud, bad research, etc) nonprofit. We should fund him to do it!!
https://en.wikipedia.org/wiki/Elisabeth_Bik [[metascience]] [[Elizabeth Bik]] is the queen of research fraud detection. pay her plz.
https://substack.com/home/post/p-149791027 [[archaeology]] it was once thought that Gobekli Tepe was a “festival city” or religious sanctuary, where people visited but didn’t live, because there wasn’t a water source. Now, they’ve found something that looks like water cisterns, and they suspect people did live there.
I don’t like the framing of “hunter-gatherer” = “nomadic” in this post.
We keep pushing the date of agriculture farther back in time. We keep discovering that “hunter-gatherers” picking plants in “wild” forests are actually doing some degree of forest management, planting seeds, or pulling undesirable weeds. Arguably there isn’t a hard-and-fast distinction between “gathering” and “gardening”. (Grain agriculture where you use a plow and completely clear a field for planting your crop is qualitatively different from the kind of kitchen-garden-like horticulture that can be done with hand tools and without clearing forests. My bet is that all so-called hunter-gatherers did some degree of horticulture until proven otherwise, excepting eg arctic environments)
what the water actually suggests is that people lived at Gobekli Tepe for at least part of the year. it doesn’t say what they were eating.
One of the interesting things I found when I finally tracked down the source is that one of the improved mazes before that was a 3D maze where mice had to choose vertically, keeping them in the same position horizontally, because otherwise they apparently were hearing some sort of subtle sound whose volume/direction let them gauge their position and memorize the choice. So Hunter created a stack of T-junctions, so each time they were another foot upwards/downwards, but at the same point in the room and so the same distance away from the sound source.
links 11/15/2024: https://roamresearch.com/#/app/srcpublic/page/11-15-2024
https://www.reddit.com/r/self/comments/1gleyhg/people_like_me_are_the_reason_trump_won/ a moderate/swing-voter (Obama, Trump, Biden) explains why he voted for Trump this time around:
he thinks Kamala Harris was an “empty shell” and unlikable and he felt the campaign was manipulative and deceptive.
he didn’t like that she seemed to be a “DEI hire”, but doesn’t have a problem with black or female candidates generally, it’s just that he resents cynical demographic box-checking.
this is a coherent POV—he did vote for Obama, after all. and plenty of people are like “I want the best person regardless of demographics, not a person chosen for their demographics.”
hm. why doesn’t it seem natural to portray Obama as a “DEI hire”? his campaign made a bigger deal about race than Harris’s, and he was criticized a lot for inexperience.
One guess: it’s laughable to think Obama was chosen by anyone besides himself. He was not the Democratic Party’s anointed—that was Hillary. He’s clearly an ambitious guy who wanted to be president on his own initiative and beat the odds to get the nomination. He can’t be a “DEI hire” because he wasn’t a hire at all.
another guess: Obama is clearly smart, speaks/writes in complete sentences, and welcomes lots of media attention and talks about his policies, while Harris has a tendency towards word salad, interviews poorly, avoids discussing issues, etc.
another guess: everyone seems to reject the idea that people prefer male to female candidates, but I’m still really not sure there isn’t a gender effect! This is very vibes-based on my part, and apparently the data goes the other way, so very uncertain here.
https://trevorklee.substack.com/p/if-langurs-can-drink-seawater-can Trevor Klee on adaptations for drinking seawater
Seems to me that Obama had the level of charisma that Hillary did not. (Neither do Biden or Harris). Bill Clinton had charisma, too. (So did Bernie.)
Also, imagine that you had a button that would make everyone magically forget about the race and gender for a moment. I think that the people who voted for Obama would still feel the same, but the people who voted for Hillary would need to think hard about why, and probably their only rationalization would be “so that Trump does not win”.
I am not an American, so my perception of American elections is probably extremely unrepresentative, but it felt like Obama was about “hope” and “change”, while Hillary was about “vote for Her, because she is a woman, so she deserves to be the president”.
I guess there are people (both men and women) who in principle wouldn’t vote for a woman leader. But there are also people who would be happy to give a woman a chance. Not sure which group is larger.
But the wannabe woman leader should not make her campaign about her being a woman. That feels like admitting that she has no other interesting qualities. She needs to project the aura of a competent person who just happens to be female.
In my country, I have voted for a woman candidate twice (1, 2), but they never felt like “DEI hires”. One didn’t have any woke agenda, the other was pro- some woke topics, but she never made them about her. (It was like “this is what I will support if you elect me”, not “this is what I am”.)
I voted for Hillary and wouldn’t need to think hard about why: she’s a democrat, and I generally prefer democrat policies.
links 11/14/2024: https://roamresearch.com/#/app/srcpublic/page/11-14-2024
https://archive.org/details/byte-magazine retro magazines
https://www.ribbonfarm.com/2019/09/17/weirding-diary-10/#more-6737 Venkatesh Rao on the fall of the MIT Media Lab
this stung a bit!
i have tended to think that the stuff with “intellectual-glamour” or “visionary” branding is actually pretty close to on-target. not always right, of course, often overhyped, but often still underinvested in even despite being highly hyped.
(a surprising number of famous scientists are starved for funding. a surprising number of inventions featured on TED, NYT, etc were never given resources to scale.)
I also am literally unconvinced that “Europe’s kindergarten” was less sophisticated than our own time! but it seems like a fine debate to have at leisure, not totally sure how it would play out.
he’s basically been proven right that energy has moved “underground” but that’s not a mode i can work very effectively in. if you have to be invited to participate, well, it’s probably not going to happen for me.
at the institutional level, he’s probably right that it’s wise to prepare for bad times and not get complacent. again, this was 2019; a lot of the bad times came later. i miss the good times; i want to believe they’ll come again.
links 11/13/2024: https://roamresearch.com/#/app/srcpublic/page/11-13-2024
https://amaranth.foundation/bottlenecks-of-aging the Amaranth Foundation’s bottlenecks of aging
https://www.celinehh.com/aging-field Celine Halioua on what the aging field needs—notably, more biotech companies that are prepared to run their own clinical trials specifically for aging-related endpoints.
a typical new biotech company never runs its own clinical trials—they license, partner, or get bought by pharma. but pharma’s not that into aging (yet) and nobody really has expertise in running aging-focused clinical trials, so that may need to happen first in a startup context. which means some investors have to be willing to put up more cash than usual....
https://en.wikipedia.org/wiki/Rapid_eye_movement_sleep_behavior_disorder is the rare sleep disorder that almost always progresses to Parkinson’s about 20 years later
https://pubmed.ncbi.nlm.nih.gov/12208347/ lipofuscin = cross-links.
it’s a “brown-yellow” pigmented substance (first observed under the microscope in the 19th century) that accumulates in post-mitotic cells with age.
it’s not one substance; it’s a mixture of “garbage” (mostly protein and lipid) that accumulates around the lysosome but can’t be disposed of through exocytosis.
it’s “autofluorescent”—it fluoresces in various wavelengths of light without being stained.
it accumulates more under conditions of oxidative stress like high-oxygen environments or in the presence of iron (which catalyzes oxidation reactions); it accumulates less in the presence of antioxidants and under caloric restriction.
evidence that lipofuscin accumulation causes disease or dysfunction seems a lot shakier in this paper.
https://barnacles.substack.com/p/understanding-as-an-art Laura Deming on visualization and the spiritual side of science
I was a little self-conscious about her dissatisfaction with “San Francisco courtier culture”—of course she’s much better at the hustle than I ever was, but I actually love it. If anything, I’ve more often felt hurt that so many people I know got sick of the game before I ever really figured out how to play it.
https://genomebiology.biomedcentral.com/articles/10.1186/s13059-019-1824-y some critiques of methylation clocks; the first one actually seems to have been an artifact of different distributions of cell types between old and young samples.
https://www.science.org/content/article/scientific-showdown-seeks-biological-clock-best-tracks-aging a contest for the best aging clock at predicting future mortality.
https://www.exactsciences.com/ cancer prognostic/diagnostic biomarker company
https://arxiv.org/abs/2411.04872 Epoch AI’s new math benchmark of original, very hard problems
https://arxiv.org/abs/2406.08467 a new benchmark for formal verification “hint” generation in the Dafny programming language
https://dafny.org/ “Dafny is a verification-aware programming language that has native support for recording specifications and is equipped with a static program verifier.”
Dafny’s formal verification is based on automated SMT solvers; compared to proof assistants like Coq/Lean/etc it’s less powerful
Dafny can be compiled to familiar languages such as C#, Java, JavaScript, Go and Python
https://www.reddit.com/r/rust/comments/1fs12l9/what_do_you_rustaceans_think_of_dafny_language/ Rust users don’t think Dafny is practical for programming “real” things in.
https://manifund.org/projects/hire-a-dev-to-finish-and-launch-our-dating-site Shreeda Segan’s OKC-clone dating site needs $10,000 to build an MVP
https://en.m.wikipedia.org/wiki/Eubulides the guy who brought you lists of paradoxes
https://en.m.wikipedia.org/wiki/Epimenides_paradox “Epimenides the Cretan says, all Cretans are liars”
as my 6-year-old son Simon pointed out, this is not actually a paradox; to be a “liar” doesn’t mean every statement you utter is a lie.
Epimenides himself didn’t intend it to be a paradox. Apparently he disagreed with his fellow Cretans about the immortality of the god Zeus.
They fashioned a tomb for thee, O holy and high one
The Cretans, always liars, evil beasts, idle bellies!
But thou art not dead: thou livest and abidest forever,
For in thee we live and move and have our being.
— Epimenides, Cretica
Wikipedia seems to trace the idea that this is a “paradox” to Bertrand Russell.
https://en.m.wikipedia.org/wiki/Peter_the_Great
this is really badly written for a Wikipedia page. i suspect some kind of nationalist vandalism.
https://en.wikipedia.org/wiki/Russian_conquest_of_Siberia most of the conquest of Siberia actually happened before Peter the Great
https://en.wikipedia.org/wiki/Yermak_Timofeyevich the Cossack ataman who began the conquest of Siberia, under the reign of Ivan the Terrible in the 1500s.
why conquer Siberia? the fur trade.
why did it work? the khans didn’t have firearms.
he was hired by a powerful merchant family, the Stroganovs
https://en.wikipedia.org/wiki/Stroganov_family
wow. this is a very close parallel (and historically contemporaneous) with the conquistadors and privateers of England, Spain, and Portugal in the Age of Exploration...except we don’t make movies and novels about it in the West. But the swashbuckling potential is amazing.
i mean there was also genocide, to be fair.
https://daviddfriedman.substack.com/p/libertarian-poems
I’ll kind of give him Kipling and Cummings; those are genuine anti-communist, anti-monarchical-absolutism, and anti-war sentiments. Yeats is doing a different thing; I love him but he is Not Our Friend.
https://www.pewresearch.org/short-reads/2024/10/24/majority-of-americans-arent-confident-in-the-safety-and-reliability-of-cryptocurrency/ wow—a full 17% of Americans have ever owned crypto.
links 10/28/2024: https://roamresearch.com/#/app/srcpublic/page/10-28-2024
Vincent deVita, chemotherapy pioneer, reflecting on how cancer research has changed (and become more bureaucratic) since the 1960s:
https://www.nature.com/articles/nrclinonc.2009.51
https://aacrjournals.org/cancerres/article/68/21/8643/541799/A-History-of-Cancer-Chemotherapy
https://cancerhistoryproject.com/article/vince-devita-on-the-history-of-chemotherapy/
https://www.yalemedicine.org/podcasts/cancer-answers-the-history-of-chemotherapy-july-6-2008
https://www.sciencefriday.com/articles/where-we-are-in-the-war-on-cancer/
https://vincenttdevitajrmdoncancer.blogspot.com/
Michael Levin has his own team (of ~20) at Tufts working on morphogenetics: https://allencenter.tufts.edu/
with a $10M founding grant from the Allen Foundation, which I expect will not be enough to complete this research program. https://alleninstitute.org/news/the-paul-g-allen-frontiers-group-announces-allen-discovery-center-at-tufts-university/
links 10/8/24 https://roamresearch.com/#/app/srcpublic/page/10-08-2024
links 11/01/2024: https://roamresearch.com/#/app/srcpublic/page/11-01-2024
https://en.m.wikipedia.org/wiki/Neats_and_scruffies a typology of AI researchers
https://notes.andymatuschak.org/About_these_notes Andy Matuschak’s working notes, mostly about educational technology (but not educational games!)
https://notes.andymatuschak.org/zUVBJdPc4kBud5fsLmPFpbw
https://notes.manjarinarayan.org/ Manjari Narayan’s notes, mostly about statistics
https://www.washingtonpost.com/health/2024/05/06/ultrasound-addiction-treatment/ ultrasound being used as an addiction treatment—the full study results aren’t published yet, but the anecdotes suggest very dramatic effects.
all drugs for neuropathic pain have poor success rates.
https://jamanetwork.com/journals/jamaneurology/fullarticle/2769608#google_vignette
https://pmc.ncbi.nlm.nih.gov/articles/PMC6452908/
https://pmc.ncbi.nlm.nih.gov/articles/PMC10711341/
https://pubmed.ncbi.nlm.nih.gov/24291734/ lots of people—maybe 6-10% of the world population—have neuropathic pain.
https://pmc.ncbi.nlm.nih.gov/articles/PMC3201926/ chronic pain generally affects about 20% of adults worldwide.
https://www.ncbi.nlm.nih.gov/books/NBK553030/
https://pmc.ncbi.nlm.nih.gov/articles/PMC6676152/
roughly half of opioid addicts treated with buprenorphine or methadone manage to abstain for 30 days after treatment: https://pubmed.ncbi.nlm.nih.gov/26599131/
https://www.whitehouse.gov/ondcp/briefing-room/2021/05/28/biden-harris-administration-calls-for-historic-levels-of-funding-to-prevent-and-treat-addiction-and-overdose/ the Biden-Harris administration has allocated $41B to preventing and treating drug addiction; hard to extract from that exactly how much is spent on rehab/treatment vs. anti-drug campaigns or law enforcement
https://www.forbes.com/sites/danmunro/2015/04/27/inside-the-35-billion-addiction-treatment-industry/ US addiction treatment spending was estimated at $35B/year back in 2015
Vampire Weekend’s Ezra Koenig:
their latest album Only God Was Above Us is wrenching and it’s kind of getting to me lately.
most of the commentary in interviews is about how Koenig, now 40 with a 5-year-old kid, has matured and found peace (though if you listen to the lyrics it’s an extremely nihilistic sort of being “at peace” with a terrible world and giving up on trying to change it)
nobody is remarking on what I see as pretty explicit themes like:
last album’s “Harmony Hall” was about a sense of betrayal regarding Ivy-League antisemitism
this album is pretty clearly a rejection of the backlash, the Gen-X (“Gen X Cops”), ex-Eastern-Bloc (“Pravda”), or specifically Jewish (in the [[Bari Weiss]]/Tablet-mag vein) “vibe shift”.
there’s a lot of reflection on heritage and generation gaps, there’s the sense that someone (his elders? his family?) is pushing him in a direction and he doesn’t want to go that way, he thinks it doesn’t make sense in his generation, in this era, but he does care enough to be conflicted and to yearn over the pain of people still (mistakenly, he thinks) struggling (“Capricorn”).
https://en.wikipedia.org/wiki/Ezra_Koenig
https://people.com/vampire-weekend-ezra-koenig-finally-feels-adult-exclusive-8625179
https://www.theguardian.com/us-news/2016/jun/20/bernie-sanders-vampire-weekend-grizzly-bear-endorsements
https://www.theguardian.com/music/2024/mar/23/ezra-koenig-vampire-weekend-interview
https://www.thejc.com/life-and-culture/music/vampire-weekend-dont-call-us-white-c3xbezac
links 10/1/24
https://roamresearch.com/#/app/srcpublic/page/10-01-2024
links 11/07/2024: https://roamresearch.com/#/app/srcpublic/page/11-07-2024
on Donna Karan
https://www.vogue.com/slideshow/donna-karan-seven-career-highlights-cold-shoulder
https://www.vogue.com/article/donna-karan-vintage
https://du42p.r.a.d.sendibm1.com/mk/mr/sh/1f8JAEjGcfF85pENVqcuM6hh5D/tiVMw3KFimvC?fbclid=IwY2xjawGZnpRleHRuA2FlbQIxMQABHdHszEMzS3x1Xda2tTh60KMighJEJKDe30rsduQmFydSSyfTpK8mwG50vg_aem_IjjUQWdiwZf2AdgdLo9azA opinion on how hormonal contraception should be done differently—I’m intrigued but I haven’t yet checked these claims out
http://esr.ibiblio.org/?p=8720 Eric S. Raymond on “user stories” done right and wrong
https://endpts.com/biotech-industry-worries-over-potential-for-rfk-jr-ally-as-fda-pick/ Casey Means has been floated as the new pick for FDA head; apparently she’s expressed concerns about vaccines and over-medication on the Joe Rogan podcast and has written a book about how most chronic diseases can be prevented by healthy lifestyles (which probably overstates the case)
https://www.slowboring.com/p/the-tyranny-of-climate-targets Matt Yglesias on why a lot of aggressive climate targets are impossible to actually meet.
why do people try anyway? if it’s “cheap talk”, why is there so much costly, substantive follow-through? incentive misalignment, I suppose?
https://www.gordian.bio/blog/the-in-vivo-screening-revolution/ Martin Borch Jensen on in-vivo screening
links 10/30/2024: https://roamresearch.com/#/app/srcpublic/page/10-30-2024
https://pmc.ncbi.nlm.nih.gov/articles/PMC10136898/ FRET is a biosensor modality.
“FRET is a non-radiative transfer of energy from an excited donor fluorophore molecule to a nearby acceptor fluorophore molecule...When the biomolecule of interest is present, it can cause a change in the distance between the donor and acceptor, leading to a change in the efficiency of FRET and a corresponding change in the fluorescence intensity of the acceptor. This change in fluorescence can be used to detect and quantify the biomolecule of interest.”
advantages:
real-time
non-destructive
sensitive to very low concentrations (picomolar and nanomolar)
highly specific because it detects conformational changes in biological molecules
this article is from a not-great journal and the author clearly does not have English as a first language… at some point I will need a more reputable source; this was from googling FRET quickly
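the reason FRET works as a “molecular ruler” is the Förster equation: transfer efficiency falls off as the sixth power of donor–acceptor distance, so tiny conformational changes swing the signal hard. a quick sketch (the ~5 nm Förster radius is a typical illustrative value, not from the article):

```python
# Förster (FRET) efficiency vs. donor-acceptor distance:
#   E = 1 / (1 + (r / R0)^6)
# R0 is the Förster radius: the distance at which exactly half the
# donor's excitation energy transfers to the acceptor.

def fret_efficiency(r_nm: float, r0_nm: float = 5.0) -> float:
    """Fraction of donor excitations transferred to the acceptor."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

# The sixth-power dependence is why a small change in distance
# (e.g. a binding-induced conformational shift) gives a big
# change in acceptor fluorescence:
for r in (3.0, 5.0, 7.0):
    print(f"r = {r} nm -> E = {fret_efficiency(r):.2f}")
```

at r = R0 the efficiency is exactly 0.5; a 2 nm shift either way takes it above ~0.95 or below ~0.12, which is the steep distance sensitivity the quoted passage is describing.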
https://www.astralcodexten.com/p/the-case-against-proposition-36 Clara Collier gives the narrow, evidence-based case that shorter jail sentences didn’t cause California’s property crime wave or drug overdose death epidemic, and longer jail sentences won’t fix those problems
I’m pretty convinced but I don’t follow this topic in great detail
metastatic malignant peripheral nerve sheath tumor is pretty bad—median survival is only 8 months after metastases are detected. but one M.O. that seems to help in several case studies is “sequence the tumor, find a mutation, use a drug that’s approved for other cancer types with the same mutation.”
PD-L1 overexpression? use a PD-1 inhibitor! checkpoint immunotherapy stays winning.
https://www.spandidos-publications.com/10.3892/ol.2024.14556 sintilimab
https://aacrjournals.org/cancerimmunolres/article/7/9/1396/470072/PD-1-Inhibition-Achieves-a-Complete-Metabolic pembrolizumab
https://scholars.uthscsa.edu/en/publications/pembrolizumab-achieves-a-complete-response-in-an-nf-1-mutated-pd- pembrolizumab
https://ascopubs.org/doi/abs/10.1200/PO.18.00375 nivolumab
BRAF V600E mutation? try a BRAF inhibitor!
https://jnccn.org/view/journals/jnccn/11/12/article-p1466.xml vemurafenib
other Raf stuff: maybe sorafenib?
https://www.tandfonline.com/doi/abs/10.4161/cbt.7.6.5932
shit that doesn’t work:
sirolimus https://onlinelibrary.wiley.com/doi/full/10.1155/2020/5784876
chemo is...not great but better than nothing. some partial responses, no complete responses, survival extended by maybe a few months. mostly it seems best to have doxorubicin in the mix.
https://onlinelibrary.wiley.com/doi/full/10.1155/2017/8685638
https://ascopubs.org/doi/abs/10.1200/JCO.2024.42.16_suppl.11583
https://www.sciencedirect.com/science/article/pii/S0923753419377907
https://ascopubs.org/doi/abs/10.1200/jco.2010.28.15_suppl.e20512
https://onlinelibrary.wiley.com/doi/full/10.1155/2011/705345 ok here’s a complete response to chemo + surgery. it can even happen.
https://ar.iiarjournals.org/content/40/3/1619.short case of long-term survival after keeping chemotherapy going a *really long time* at gradually decreasing dose and widening inter-treatment interval.
https://onlinelibrary.wiley.com/doi/abs/10.1002/ijc.33201 pazopanib, an angiogenesis inhibitor, similarly has a low response rate but can extend survival a bit
https://proof-scaling-meeting.vercel.app/ formal verification conference
https://chalmermagne.substack.com/p/death-by-a-thousand-roundtables what it’s actually like to work in UK policy. sounds dismal.
https://www.972mag.com/lavender-ai-israeli-army-gaza/ AI bombing. critical perspective on Israel.
https://goingon.org/ a timeline-based, “citizen journalism” news site.
https://statistics.berkeley.edu/about/news/steinhardt-announces-co-founding-transluce-non-profit-ai-research-lab AI interpretability nonprofit, Jacob Steinhardt
mech-interp seems like straightforwardly real and good work from a variety of perspectives on AI. helps with many risk scenarios including some x-risk scenarios; helps make the technology stronger & more reliable, which is good for the industry in the long run.
https://blog.benjaminreinhardt.com/young-people-technical-training this is straightforwardly true, yes, you should learn technical stuff.
https://www.washingtonpost.com/opinions/2024/10/28/jeff-bezos-washington-post-trust/ Jeff Bezos on why the Washington Post isn’t endorsing a Presidential candidate. this is a solidly written persuasive essay; it seemed legit to me, but I could be persuaded otherwise.
links 10/29/2024: https://roamresearch.com/#/app/srcpublic/page/10-29-2024
https://www.theguardian.com/news/2024/oct/29/acute-psychosis-inner-voices-avatar-therapy-psychiatry a therapist acting out the voices in your head might be an effective treatment for psychosis
https://www.futurehouse.org/research-announcements/wikicrow SOTA (?) paper summarization from FutureHouse
links 10/23/24:
https://roamresearch.com/#/app/srcpublic/page/10-23-2024
https://eukaryotewritesblog.com/2024/10/21/i-got-dysentery-so-you-dont-have-to/ personal experience at a human challenge trial, by the excellent Georgia Ray
https://catherineshannon.substack.com/p/the-male-mind-cannot-comprehend-the
I...guess this isn’t wrong, but it’s a kind of Take I’ve never been able to relate to myself. Maybe it’s because I found Legit True Love at age 22, but I’ve never had that feeling of “oh no the men around me are too weak-willed” (not in my neck of the woods they’re not!) or “ew they’re too interested in going to the gym” (gym rats are fine? it’s a hobby that makes you good-looking, I’m on board with this) or “they’re not attentive and considerate enough” (often a valid complaint, but typically I’m the one who’s too hyperfocused on my own work & interests) or “they’re too show-offy” (yeah it’s irritating in excess but a little bit of show-off energy is enlivening).
Look: you like Tony Soprano because he’s competent and lives by a code? But you don’t like it when a real-life guy is too competitive, intense, or off doing his own thing? I’m sorry, but that’s not how things work.
Tony Soprano can be light-hearted and always have time for the women around him because he is a fictional character. In real life, being good at stuff takes work and is sometimes stressful.
My husband is, in fact, very close to this “Tony Soprano” ideal—assertive, considerate, has “boyish charm”, lives by a “code”, is competent at lots of everyday-life things but isn’t too busy for me—and I guarantee you would not have thought to date him because he’s also nerdy and argumentative and wouldn’t fit in with the yuppie crowd.
Also like. This male archetype is a guy who fixes things for you and protects you and makes you feel good. In real life? Those guys get sad that they’re expected to give, give, give and nobody cares about their feelings. I haven’t watched The Sopranos but my understanding is that Tony is in therapy because the strain of this life is getting to him. This article doesn’t seem to have a lot of empathy with what it’s like to actually be Tony...and you probably should, if you want to marry him.
https://fas.org/publication/the-magic-laptop-thought-experiment/ from Tom Kalil, a classic: how to think about making big dreams real.
https://paulgraham.com/yahoo.html Paul Graham’s business case studies!
https://substack.com/home/post/p-150520088 a celebratory reflection on the recent Progress Conference. Yes, it was that good.
https://en.m.wikipedia.org/wiki/Hecuba in some tellings (not Homer’s), Hecuba turns into a dog from grief at the death of her son.
https://www.librariesforthefuture.bio/p/lff
a framework for thinking about aging: “1st gen” is delaying aging, which is where the field started (age1, metformin, rapamycin), while “2nd gen” is pausing (stasis), repairing (reprogramming), or replacing (transplanting) cells/tissues. 2nd gen usually uses less mature technologies (eg cell therapy, regenerative medicine), but may have a bigger and faster effect size.
“function, feeling, and survival” are the endpoints that matter.
biomarkers are noisy and speculative early proxies that we merely hope will translate to a truly healthier life for the elderly. apply skepticism.
https://substack.com/home/post/p-143303463 I always like what Maxim Raginsky has to say. you can’t do AI without bumping into the philosophy of how to interpret what it’s doing.
links 10/9/24 https://roamresearch.com/#/app/srcpublic/page/yI03T5V6t
links 8/7/2024
https://roamresearch.com/#/app/srcpublic/page/yI03T5V6t
links 10/4/2024
https://roamresearch.com/#/app/srcpublic/page/10-04-2024
links 10/2/2024:
https://roamresearch.com/#/app/srcpublic/page/10-02-2024