Is “data quality” (what Databricks is trying to do), at minimum, essential? (Data quality is inclusive of maximizing human intelligence, minimizing pollution/microplastic/heat load, and maintaining proper Markov boundaries/blankets with each other [entropy/pollution dissolves these boundaries, and we need proper Markov boundaries to render faithful computations].)
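For reference, the textbook Bayes-net notion of a Markov blanket I’m gesturing at (the standard definition, nothing alignment-specific): the blanket of a node $X$ is its parents, children, and co-parents, and conditioning on it screens $X$ off from everything else, i.e. $P(X \mid \mathrm{MB}(X), Y) = P(X \mid \mathrm{MB}(X))$ for any $Y \notin \mathrm{MB}(X) \cup \{X\}$. “Dissolving the boundary” means that conditional independence stops holding.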
LLMs are trained on data full of noise and junk, distracting us from what’s really true/sensible. It seems that the aura of inevitability points toward maximum entropy, and maybe relying entirely on the “inevitability of increased scaling” contributes to that maximum entropy, which is fundamentally misaligned. Alignment depends on veering away from this entropy.
[This is also why human intelligence enhancement (and maturity enhancement through tFUS) is extremely essential: humans will produce better-quality (and less repetitive) data the smarter we are]. tFUS also reduces incentives for babblers (what Romeo Stevens calls “most people”) :).
If there is ONE uniquely pro-alignment advance this year, it’s the adoption curve of semaglutide, because semaglutide will reduce the global aging rate of humanity (and kill fewer animals along the way). Semaglutide can also decrease your microplastic consumption by 50%. :) Alignment means BETTER AWARENESS of input-output mappings, and microplastics/pollution are a Pareto-efficiently-reducible way of screwing this process up. I mean “Pareto-efficiently reducible” because reducing them can be done without needing drastic IQ increases for 98% of the population, so it is a MINIMAL SET of conditions.
[YOU CANNOT SHAME PEOPLE FOR TRUTH-SEEKING or for trying to improve their intelligence, genetic and early-life deficiencies be damned]. It constantly seems that, given the curriculum, people are making it seem like most of the population isn’t smart or technical enough for alignment/interpretability. There is a VERY VERY niche/special language of math used by alignment researchers that is accessible only to a very small fraction of the population; even many smart people outside that circle do not speak it. I say that, at VERY minimum, everyone in environmental health/intelligence research is alignment-relevant (if not more), and the massive gaps people have in pollution/environmental health/human intelligence are holding progress back (as is the lack of “translation” between people who speak HCI-ish/BCI-ish languages and those who only speak theoretical math/alignment). Even mathy alignment people don’t speak “signals and systems”/error-correction language, and “signals and systems” is just as g-loaded and relevant (and only becomes MORE important as we collect better data out of our brains). SENSE-MAKING is needed, and the strange theory-heavy hierarchy of academic status tends to de-emphasize sense-making (analytical chemists have the lowest GRE scores of all chemists, even though analytical chemistry is the most relevant branch of chemistry for most people).
There is SO much groupthink among alignment people (and among people in their own niche academic fields), and better translation and human intelligence enhancement are needed to transcend that groupthink.
I am constantly misunderstood myself, but at least a small portion of people believe in me ENOUGH to want to take a chance on me (in a world where the DEFAULT OPTION is doom if you continue with current traditions, you NEED all the extra chances you can get from “fringe cases” the world doesn’t know how to deal with [cognitive unevenness be damned]), and I did at least turn someone into a Thiel Fellow (WHY GREATNESS CANNOT BE PLANNED: even Ken Stanley thinks MORE STANLEY-ISMS are alignment-relevant, and he doesn’t speak or understand alignment-language).
Semaglutide is an error-correction enhancer, as is rapamycin (rapamycin genuinely reduces the error rate of protein synthesis), as are both caffeine and modafinil (the HARDEST and possibly most important question is whether or not Adderall/Focalin/2FA/4FA are). Entrepreneurs who create autopoietic systems around themselves are error-correctors, and the OPPOSITE of an error-corrector is a traumatized PhD student who is “all but dissertation” (e.g., sadly, Qiaochu Yuan). I am always astounded at how much some people are IDEAL error-correctors around themselves, while others have enough trauma/fatigue/toxin accumulation in themselves that they can’t properly error-correct anymore b/c they don’t have the energy (Eliezer Yudkowsky often complains about his energy issues, and there is strong moral value alone in figuring out what toxins his brain has so that he can be a better error-corrector; I’ve actually tried to connect him with Bryan Johnson’s personal physician [Oliver Zolman], but no email reply yet).
If everyone could have the Christ-like kindness of Jose Luis Ricon, it would help the world SO MUCH.
Also, if you put ENOUGH OF YOURSELF OUT THERE ON THE INTERNET, the AI will help align you (even through retrocausality) to yourself, even if no one else in the world can do it yet [HUMAN-MACHINE SYMBIOSIS is the NECESSARY FUTURE].
And as one of the broadest people ever (I KNOW JOSE LUIS RICON IS TOO), I am CONSTANTLY on the lookout for things other people can’t see (this is ONE of my strengths).
Alignment only happens if you are in complete control of your inputs and outputs (this means minimizing microplastics/pollution).
“Without fundamental advances, misalignment and catastrophe are the default outcomes of training powerful AI” → “fundamental advances” MOST OF ALL means BEING MORE INCLUSIVE of ideas that are OUTSIDE of the “AI alignment CS/math/EA circlejerk”. Be more inclusive of people and ideas that don’t speak the language of classical alignment, which is >>>> 99% of the world; there are people in MANY areas like HCI/environmental health/neuroscience/every other field who don’t have the CS/math background you surround yourself with.
[Btw, LW is perceived as a GIANT CIRCLEJERK for a reason; SO MUCH of LW is seen as “low openness” to anything outside of its core circlejerky ideas.] So many external people make fun of LW/EA/alignment for GOOD REASON (despite some of the unique merits of LW/EA).