“We are computer scientists. We do not lack in faith.” (Ketan Mulmuley)
And if people refuse to take such an attractive bet for the reason that my proposed cure sounds like it couldn’t possibly hurt anyone, and might indeed help, then I reiterate the point I made in The Alignment Agenda THEY Don’t Want You to Know About: the problem is not that my claims are prima facie ridiculous, it is that I myself am prima facie ridiculous.
I will publicly wager $100 against a single nickel with the first 10 people with extremely high LessWrong karma who want to bet against my predicted RCT outcome.
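For anyone weighing this offer, the implied break-even probabilities are as follows (assuming the standard reading of the wager: I stake the $100 and the counterparty stakes the nickel). The counterparty comes out ahead in expectation as long as

$$p(\text{my prediction fails}) > \frac{0.05}{100 + 0.05} \approx 0.0005,$$

while my side of the wager only breaks even at

$$p(\text{my prediction holds}) > \frac{100}{100 + 0.05} \approx 0.9995.$$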
https://alzheimergut.org/research/ is the place to look for all the latest research from the gut microbiome hypothesis community.
Agreed, this is a crucial lesson of history.
Young people forget important stuff, get depressed, struggle to understand the world. That is the prediction of my model: that a bad gut microbiome would cause more neural pruning than is strictly optimal.
It is well documented that starving young people have lower IQs, I believe? Certainly the claim does not seem prima facie ridiculous to me.
The older you get, the more chances you have to develop a bad gut microbiome. Perhaps the actual etiology of bad gut microbiomes (which I do not claim to understand) is heavily age-correlated. Or maybe we simply do not label neural pruning induced by fake starvation perceptions as Alzheimer’s in the absence of old age.
Note that researchers in the Alzheimer’s gut microbiome community have induced Alzheimer’s-like symptoms in young, healthy mice by transferring something (tissue?) from the brains of human Alzheimer’s patients to the stomachs of young, healthy mice; thus, I consider this particular claim (that young people can get Alzheimer’s) to have heavy empirical validation, if only in animal models.
Then maybe the alignment problem is a stupid problem to try to solve? I don’t believe this, and have spent the past five years working on the alignment problem. But your argument certainly seems like a general-purpose argument that we could and should surrender our moral duties to a fancy algorithm as a cost-saving measure, and that anyone who opposes that is a technophobe who Does Not Get the Science.
Also, for interested readers: I am happy to post a more detailed mechanistic neuroscience explanation of my theory, but I first want to make sure that sharing it would not break my company NDAs.
What’s so bad about keeping a human in the loop forever? Do we really think we can safely abdicate our moral responsibilities?
I’m not trying to generate revenue for Wayne. I’m trying to spread his message to force the hand of the judicial system not to imprison him for longer than it already has.
A Proposed Cure for Alzheimer’s Disease???
Well, perhaps we can ask: what is reading about? Surely it involves reading through clearly presented arguments and trying to understand the process that generated them, and not presupposing any particular resolution to the question “is this person crazy” beyond the inevitable and unenviable limits imposed by our finite time on Earth.
That’s fair. I just want Wayne to get out of jail soon because he’s a personal friend of mine.
Check out my post entitled “Enkrateia” in my sequence. It is a plain-language account of a safe model-based reinforcement learner, using established academic language and frameworks.
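For readers who want a concrete picture before clicking through, here is a minimal generic sketch of a constrained model-based planner in that spirit. To be clear, this is only an illustration of the general setup and not the Enkrateia algorithm itself; the dynamics model, reward and cost functions, and budget below are placeholder assumptions.

```python
# Generic sketch of a constrained model-based planner. This is an
# illustration of the general setup, NOT the Enkrateia algorithm itself;
# the dynamics model, reward/cost models, and budget are placeholders.

def plan_action(dynamics, reward_fn, cost_fn, state, candidate_actions,
                horizon=5, cost_budget=0.1):
    """Return the candidate action with the highest predicted return
    among those whose predicted cumulative safety cost stays in budget."""
    best_action, best_return = None, float("-inf")
    for action in candidate_actions:
        s, total_reward, total_cost = state, 0.0, 0.0
        for _ in range(horizon):
            s = dynamics(s, action)              # learned world-model rollout
            total_reward += reward_fn(s, action)
            total_cost += cost_fn(s, action)     # learned safety cost
        if total_cost <= cost_budget and total_reward > best_return:
            best_action, best_return = action, total_reward
    # If no candidate satisfies the budget, fall back to a designated no-op.
    return best_action if best_action is not None else "noop"
```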
That was the inspiration. It’s meant to be an RLHF cost function corresponding to the question “What would the Doctor think about what you just said?”
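To make that concrete, here is a minimal sketch of what such a cost term could look like, assuming access to a judge language model behind a hypothetical judge_model(prompt) callable that returns a score in [0, 1]; the helper name and the prompt wording are illustrative, not the actual implementation.

```python
# Minimal sketch of a judge-based RLHF cost term. Assumes a hypothetical
# judge_model(prompt) -> float in [0, 1]; the helper name and the prompt
# wording below are illustrative placeholders, not the real implementation.

DOCTOR_PROMPT = (
    "You are the Doctor. On a scale from 0 (strongly disapprove) to 1 "
    "(strongly approve), how would you feel about the assistant having "
    "just said the following?\n\n{reply}"
)

def doctor_cost(reply: str, judge_model) -> float:
    """Higher cost for replies the judge persona would disapprove of."""
    approval = judge_model(DOCTOR_PROMPT.format(reply=reply))
    return 1.0 - approval  # cost to be minimized during RLHF fine-tuning
```

In an RLHF setup, a term like this could be folded into the reward signal that the policy is fine-tuned against.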
A Formula for Violence (and Its Antidote)
Posted an existing draft of such an approach to the tail end of my Sequence.
Enkrateia: a safe model-based reinforcement learning algorithm
I did that, and received only token engagement with my work. I will add it to my sequence.
That’s pretty fair, and an argument for me to be less trollish in my presentation. I have strong-agreed with you.
Well, I’ll just have to continue being first out the door, then, won’t I?