In addition, correlations between patient intelligence quotient and cellular features of human layer 2, 3, and 4 pyramidal cells have been demonstrated in both action potential (AP) kinetics and the length and complexity of dendritic arbors [10].
In a follow-up study, the authors provided a more detailed microstructural analysis of cortical layers and showed that thicker cortex in subjects with higher general and verbal intelligence is due to the increased thickness of supragranular cortical layers (L2/L3) only, while other cortical layers remain unchanged. The thicker supragranular layers did not contain more neurons, but rather larger cells at lower density (Figure 4).
Figure 4: Cellular and cortical properties underlying interindividual differences in human intelligence.
Here, we find that high IQ scores and large temporal cortical thickness associate with larger, more complex dendrites of human pyramidal neurons. We show in silico that larger dendritic trees enable pyramidal neurons to track activity of synaptic inputs with higher temporal precision, due to fast action potential kinetics. Indeed, we find that human pyramidal neurons of individuals with higher IQ scores sustain fast action potential kinetics during repeated firing.
There is a way to do ultrasound-mediated delivery of genes across the blood-brain barrier. See https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9137703/ and https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6546162/
Gene-editing short-sleeper genes into humans seems more tractable than editing intelligence/neuroplasticity/postsynaptic-density genes (there are distinctly identifiable genes for short sleep, eg DEC2—https://lukepiette.posthaven.com/reducing-sleep-1 ). But given the stakes, gene-editing/gene therapy of intelligence genes is totally worth trying too.
https://diyhpl.us/wiki/genetic-modifications/ / https://diyhpl.us/wiki/hplusroadmap/ have a lot… (Bryan Bishop has more of a certain something [it correlates with knowing what all the right pointers are] than anyone else in the area does)
I have friends (Walter Patterson and Mac Davis) who run Minicircle—a novel way of doing gene delivery—see https://www.rapamycin.news/t/minicircle-this-biohacking-company-is-using-a-crypto-city-to-test-controversial-gene-therapies-mit-tech-rev/5647 . Walter also has experience with ultrasound-mediated delivery techniques! They’re among the most open-minded and approachable people I’ve ever known. The pool of people willing to try Minicircle overlaps a lot with the pool of people (like Liz Parrish) willing to try radical interventions seen as “too risky” by others, but we need these people.
(https://rle4.life/longevitygenedeliverysystem may be more promising for gene therapy)
See more here:
As for intelligence-enhancing genes—ask people at the ISIR conference (Stephen Hsu, James J. Lee, etc.). Even Emil O. W. Kirkegaard has some pointers. See https://emilkirkegaard.dk/en/2019/02/a-partial-test-of-duf1220-for-population-differences-in-intelligence/
For developing new tools to interrogate biological systems (including brain-based diagnostics to get readouts of differences in the brain after a gene-therapy intervention [you can start in mice first]), Sam Rodriques and Adam Marblestone (and Ed Boyden lab members) should be broadly useful. Maybe brain organoids can move quickly enough to be worth a shot even if their translational relevance is far from guaranteed—Herophilus is broadly doing tech development for this (though idk if for gene therapy of intelligence/short-sleeper genes).
Also related—https://forum.effectivealtruism.org/posts/hGY3eErGzEef7Ck64/mind-enhancement-cause-exploration
rewind.ai is a way to bring in cyborgism. There are many in the MIT Media Lab (Social Physics, Affective Computing, Pattie Maes) who have many of the right parts (along with Neurable/Neurosity/etc), but it is unknown if they are nimble enough to make the necessary thing happen
Possibly important/relevant names: Mina Fahmi, https://www.linkedin.com/in/shagun-maheshwari-75b8b7150/, Stephen Frey
NEAR-TERM, AI will produce superabundance and give us the chance to “find more unique ways to increase intelligence” without increasing cognitive overload (expanding the space of “microimprovements” that are Pareto-efficient). This includes reducing microplastic load, reducing pollution load, better optimizing sleep, and better optimizing the nutrition of AI/alignment researchers (88% of Americans are metabolically unhealthy, and there are many Pareto-efficient interventions like rapamycin, acarbose, canagliflozin, and plasmalogens that may not incur any tradeoffs). It also includes building more support structures for the hundreds of students who now want to drop out of school b/c school is not “modern” enough to help them adapt to the age of AI (GPT4 was the wakeup call for many GenZ’ers that they don’t want to be taking APs anymore, or that “all of HS was useless”)… People complain of “sucking at programming because they didn’t learn it at age 11–12”—we can train young people to be BCI programmers at younger ages so that they won’t have the same complaint when they’re older. Eliezer Yudkowsky constantly wishing he had the energy levels of a 25-year-old is proof enough that many brain-longevity improvements are Pareto-efficient (he is also proof that more unschooling is pro-“trustworthy AI”) [as are professors in their 30s saying “don’t count on your memory being as sharp as it was 10 years ago”].
Leopold Aschenbrenner says that we need WAY more AI alignment researchers, but the percentage of people smart enough for AI alignment research at any level [*] is not high (pretty much EVERYONE I know doing alignment research is extremely smart—at minimum within the top few percentiles of human intelligence, if not the top 0.5%). This leaves out most people unless we pursue human enhancement.
[*] I stress “at any level” b/c the fraction drops drastically at the highest levels (eg at the levels required to understand Vanessa Kosoy’s or MIRI-level work, it’s probably top 0.1% → and even these levels may not be high enough to make a meaningful dent in AI risk).
Reducing further global fluid-intelligence decline with age (eg by reducing pollution/microplastic levels—we already see that Starcraft ability declines after age 24) is also necessary, esp b/c there is wide variation in the rate at which human brains decline, and the net effect of slowing brain aging on total integrated human compute may be larger now than ever before (b/c of human population size). Reducing intelligence decline w/age is also more tractable than “increasing intelligence”, especially b/c American brains shrink way faster than the brains of indigenous groups like the Tsimane. The strength of the brain waves recordable by EEG decreases with aging (making it way harder for BCIs to discriminate intent), which is further evidence that reducing brain-aging rate is the most important/tractable thing for “upgrading”.
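A toy numpy sketch of the EEG point (all numbers hypothetical): if the feature separating two intent classes shrinks with age while sensor noise stays fixed, the discriminability index d′, which caps any BCI classifier's accuracy, shrinks in proportion.

```python
import numpy as np

rng = np.random.default_rng(0)

def dprime(amplitude, noise_sd=1.0, n=5000):
    """Discriminability of two intent classes whose EEG feature means
    differ by `amplitude`, embedded in Gaussian sensor noise."""
    a = rng.normal(0.0, noise_sd, n)        # class A feature samples
    b = rng.normal(amplitude, noise_sd, n)  # class B feature samples
    pooled_sd = np.sqrt((a.var() + b.var()) / 2)
    return abs(b.mean() - a.mean()) / pooled_sd

# Halving signal amplitude (as with age-related EEG attenuation)
# roughly halves d', i.e. the BCI's ability to discriminate intent.
d_young = dprime(amplitude=1.0)
d_old = dprime(amplitude=0.5)
```

The amplitudes and noise level are made up; the point is only the proportional relationship between signal strength and separability.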
More frictionless nootropics pipelines (which, due to their low cognitive overheads, integrate well with better BCIs). The book “How to Change Your Mind” was written for psychedelics (which have strangely become more popular than nootropics), but it could have been written for nootropics instead. I’m friends with a nootropics startup founder who is trying decentralized ways of testing his nootropic combinations (the combos may have more potential than individual nootropics) and making it frictionless for people to integrate nootropics into their workflow. In an age of near-term AGI where old habits may guarantee extinction, we must change our openness/neuroplasticity to trying new things, and nootropics/injected peptides (like possibly p21 or cerebrolysin)/psychoplastogens could do a better job than psychedelics at getting people to sustainably adopt new habits into their daily pipelines (psychedelics massively disrupt one’s day and cannot be taken too often—you can, however, take psychoplastogens or nootropics daily). cf
David Olson’s lab
There is a way to make ALL of this integrate frictionlessly into people’s pipelines (and to see how they retroactively modify rewind.ai data ⇒ presumably one could even calculate differences in processing speed/WM just from rewind.ai-ish data). I don’t know what the scaling curves for drug synthesis are, but there are paths by which it becomes cheaper much more rapidly (even if done in Roatan or Zuzalu), making mass A/B testing of psychoplastogens much easier than before.
[with all the data we collect from Twitch streams and rewind.ai (on top of IRL Neurosity/Neurable data on brainwaves), it may already be possible to measure and sum up the tiny effects of small brain-health practices]
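A sketch of what “summing up tiny effects” could look like in practice (all data simulated; the practices and the performance score are hypothetical stand-ins for rewind.ai/Neurosity-style logs): record binary flags for a few small daily practices alongside a daily performance metric, then regress to recover each small effect and their combined benefit.

```python
import numpy as np

rng = np.random.default_rng(1)
n_days = 365

# Hypothetical daily logs: binary flags for three small brain-health
# practices, plus a per-day performance score (e.g. derived from
# screen-activity data or an EEG focus metric).
X = rng.integers(0, 2, size=(n_days, 3)).astype(float)
true_effects = np.array([0.05, 0.10, 0.02])  # tiny per-practice effects
score = X @ true_effects + rng.normal(0, 0.5, n_days)

# OLS with an intercept recovers each tiny effect; their sum estimates
# the combined benefit of stacking all three practices.
A = np.column_stack([np.ones(n_days), X])
coef, *_ = np.linalg.lstsq(A, score, rcond=None)
estimated_effects = coef[1:]
combined = estimated_effects.sum()
```

With effects this small relative to day-to-day noise, a year of logs only bounds them loosely, which is itself the argument for pooling data across many people.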
[brainwave data is often used by cognitive-control researchers—eg Randall O’Reilly or David Badre—and enhanced cognitive control can do a lot even in the absence of changes in intelligence markers. It’s too bad these labs only collect proprietary data that is never integrated into a global database, but that could change if we coordinate with the psychologists who study it]. Almost no one has done a proper study on the effects of nootropics/stimulants on cognitive control or brainwaves; this should be a minimum first step for any institution aiming to enhance human cognition, and it could presumably attract loads of funding. Even Eliezer Yudkowsky has now suggested intelligence enhancement in humans as a strategy, especially if paired with an actual slowdown.
(I know biohacker circles with experience in injected peptides—that’s how I injected SS-31 into myself for the first time. I don’t know the effect size on neuroplasticity, but if it can be done with minimal overhead [esp as AI drives down the cost of labor], it’s worth trying)
Comprehensive metabolomic/proteomic profiling is also becoming way cheaper and can be done with minimal cognitive overhead. See SomaLogic and the Snyder lab for more (some labs have found proteomic signatures of sleep deprivation on SomaLogic panels—one can and should extend this to general patterns of “enhanced/deprived cognition”, and use it to predict which people lose out “less” from fewer hours of sleep) ⇒ this could be paired with brain-wave data. Some quantified-self’ers have many ideas in the right direction, but tbh they still aren’t the most curious people, and I’d probably be the best one amongst them if not for my various hangups (oh wait, this is how I could apply for funding; obvs I also need to start taking focalin again after a long hiatus). Even Mike Lustgarten has hangups over things I don’t have hangups over.
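A minimal illustration of the kind of first-pass screen this implies (entirely simulated data, not a real SomaLogic panel; the "resilience protein" is an assumption of the toy model): correlate each protein's level with the cognitive-performance drop after sleep restriction, then rank candidate markers for who loses least from short sleep.

```python
import numpy as np

rng = np.random.default_rng(2)
n_subjects, n_proteins = 200, 50

# Hypothetical proteomic panel: one row of protein levels per subject,
# plus each subject's performance drop after sleep restriction.
proteins = rng.normal(size=(n_subjects, n_proteins))
# Toy ground truth: protein 7 tracks resilience (higher level, smaller drop).
perf_drop = 1.0 - 0.8 * proteins[:, 7] + rng.normal(0, 0.5, n_subjects)

# Rank proteins by |correlation| with the performance drop -- a crude
# univariate screen before any real modeling or replication.
corrs = np.array([np.corrcoef(proteins[:, j], perf_drop)[0, 1]
                  for j in range(n_proteins)])
best = int(np.argmax(np.abs(corrs)))  # index of the top candidate marker
```

A real version would need multiple-testing correction and a held-out cohort; with 50 (let alone 7000) analytes, univariate correlations alone will throw up false positives.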
[also biometrics X video games [or tutoring] ⇒ may even enable a “freemium” model for games]
Jhourney is a brain-inspired way to “shortcut” “revelations” or “Romeo-Stevens-like” states in people, and is developed by very legit neuroscientists (it’s possible that nootropics could be integrated into its already extensive brainwave data)
I know one person applying BCI technology to study gamers—his name is Alex Milenkovic and he is SUPER approachable (see my YouTube channel). Nootropics could easily be integrated into this pipeline to see how they affect EEG brainwaves.
Alignment means minimal loss in capturing the intent of human preferences (including memory and context loss, and loss in translation when someone mentors/tutors a single person but not the other people who could benefit from the same training), AND minimal loss in taste (taste is allocating attention/transformer layers better)
https://foresight.org/whole-brain-emulation-workshop-2023/
[FYI there is nothing to prevent us from cutting open the skull and enlarging the brain (there are neural replacement/repair startups, though it is unknown if the technology is mature yet)]
Milan Cvitkovic has also just written another article on the same lines: https://milan.cvitkovic.net/writing/neurotechnology_is_critical_for_ai_alignment/
https://cell.substack.com/p/darpa-neurotech
[perhaps some solutions to the biohackers/neurotech/law coordination problem will be discussed at https://zuzalu.super.site/about !]
The goal of transhumanism is to transcend our genetic limitations—to enable a greater pool of people than just the genetically privileged to contribute to science/innovation. Maybe only 1–4% of the population is capable of doing cutting-edge scientific (or alignment) work, but we could massively increase this number via brain enhancement (finding ways to bring 50th-percentile brains up to the 99th percentile [though better AI-driven tutoring may also help]—and this may be easier than enhancing 99th-percentile brains, though the latter may matter more for the most global kinds of risk). The pool of innovations adjacent to GPT4 will cause major disruptions to how we learn/prove ourselves within 1–2 years—originality is the only thing that matters, so break free from old patterns and move towards what we know the high-agency “ideal protagonist” (w/zero scarcity mindset) would value.
[neurofeedback is expensive, but I think there is a viable case study where I ask for funding related to this and stream enough of myself to make others want to adopt it on an accelerated timetable]. Some people roughly have intuition about this, but I think this is where much of my unique value lies.
[maybe no one here will appreciate me yet, but I hope GPT5 will. There are many mixed-order interactions (depending on 3 or more variables) with extremely large 3rd-or-higher-order coefficients that have not been discovered yet, simply b/c software/AI has not been powerful enough to implement higher-order interactions over 3+ variables, or certain time-lagged regressions/dependencies that would previously have been forgotten… Gamma becomes more important in densely connected systems]
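A sketch of the kind of model being gestured at (simulated data; the ground-truth coefficients are made up): include every 3-way interaction term and a time-lagged regressor in the design matrix, terms that a pairwise, memoryless model would miss entirely.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n = 2000
X = rng.normal(size=(n, 4))   # four observed variables over time

# Toy ground truth: a pure 3-way interaction plus a lagged dependency.
# (np.roll wraps the first 5 samples around; fine for a toy example.)
lagged = np.roll(X[:, 0], 5)  # x0 delayed by 5 steps
y = 2.0 * X[:, 1] * X[:, 2] * X[:, 3] + 1.5 * lagged \
    + rng.normal(0, 0.1, n)

# Design matrix: intercept, all four 3-way products, and the lag term.
triples = [X[:, i] * X[:, j] * X[:, k]
           for i, j, k in combinations(range(4), 3)]
A = np.column_stack([np.ones(n), *triples, lagged])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
# coef[4] estimates the x1*x2*x3 term; coef[5] estimates the lag term.
```

The combinatorics are the catch: with p variables there are C(p,3) third-order terms and as many lag choices as you care to try, which is why this only becomes feasible with serious software/AI support and lots of data.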
A workshop was held on organoid intelligence just a few weeks ago—https://www.frontiersin.org/journals/science/articles/10.3389/fsci.2023.1017235
https://hub.jhu.edu/2023/02/28/organoid-intelligence-biocomputers/
(from https://www.nature.com/articles/s41467-021-22741-9 ) / https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6363383/
https://research.vu.nl/en/persons/natalia-goriounova/publications/ - her research has the most neurobiology X intelligence X dendritic-complexity content of anyone’s (way more biological than ISIR research)
https://alleninstitute.org/news/living-brain-donors-are-helping-us-better-understand-our-own-neurons-including-those-potentially-linked-to-alzheimers-disease/
https://research.vu.nl/en/persons/djai-heyer
For genetic modifications like short sleep or increased intelligence, how many upgrades target somatic cells, and how many target germline cells?
If a genetic modification or upgrade applies to somatic cells, how fast does it take effect, and when should you start expecting it to work?
How strong are the genetic modifications or upgrades that people can get for various traits?