I’m here pretty much just for the AI related content and discussion, and only occasionally click on other posts randomly: so I guess I’m part of the problem ;). I’m not new, I’ve been here since the beginning, and this debate is not new. I spend time here specifically because I like the LW format/interface/support much better than Reddit, and LW tends to have a high concentration of thoughtful posters with a very different perspective (which I often disagree with, but that’s part of the fun). I also read /r/MachineLearning/ of course, but it has different tradeoffs.
You mention filtering for Rationality and World Modeling under More Focused Recommendations—but perhaps LW could go farther in that direction? Not necessarily full subreddits, but it could be useful to have something like per-user ranking adjustments based on tags, so that people could more easily configure/personalize their experience. Folks more interested in Rationality than AI could uprank the former and then see more of it rather than the latter, etc.
AI needs Rationality, in particular.
Not everyone agrees that rationality is key here (I know one prominent AI researcher who disagreed).
There is still a significant—and mostly unresolved—disconnect between the LW/Alignment and mainstream ML/DL communities, but the trend is arguably looking promising.
I think in some sense The Sequences are out of date.
I would say “tragically flawed”: noble in their aspirations and very well written, but overconfident in some key foundations. The sequences make some strong assumptions about how the brain works and thus the likely nature of AI, assumptions that have not aged well in the era of DL. Fortunately the sequences also instill the value of updating on new evidence.
but it could be useful to have something like per-user ranking adjustments based on tags, so that people could more easily configure/personalize their experience.
Just to be clear, this does indeed exist. You can give a penalty or boost to any tag on your frontpage, and so shift the content in the direction of topics you are most interested in.
LOL that is exactly what I wanted! Thanks :)
It currently gives fixed-size karma bonuses or penalties. I think we should likely change it to be multipliers instead, but either should get the basic job done.
I can see the logic of multipliers, but in the edge case of posts with zero or negative karma they do weird stuff. If you set big multipliers for 5 topics, and there is a −1 karma post that ticks every single one of those topics, then you will never see it. But you of all people are the one who should see that post, which an additive bonus achieves.
(Not significant really though.)
You could just not have the multipliers apply to negative karma posts.
I would expect that if someone wants to only see AI alignment posts (a wish someone mentioned), setting a +1000 karma bonus would provide that result, but it would also mess up the sorting, as the karma differences between boosted posts become relatively smaller.
A modifier of 100x should allow a user to actually only see one tag.
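To make the tradeoff discussed in the last few comments concrete, here is a minimal sketch (hypothetical function names, tag names, and numbers; my own illustration rather than LessWrong's actual scoring code) of how the two schemes treat a −1 karma post that matches several boosted tags:

```python
# Toy comparison of the two tag-adjustment schemes discussed above.
# Function names, tag names, and numbers are hypothetical, not LessWrong's code.

def additive_score(karma, post_tags, tag_bonus):
    """Add a fixed karma bonus (or penalty) for each matching tag."""
    return karma + sum(tag_bonus.get(tag, 0) for tag in post_tags)

def multiplicative_score(karma, post_tags, tag_multiplier):
    """Multiply karma by a weight for each matching tag."""
    score = karma
    for tag in post_tags:
        score *= tag_multiplier.get(tag, 1.0)
    return score

# Edge case from the comment above: a -1 karma post that ticks five boosted topics.
tags = ["ai", "rationality", "world-modeling", "alignment", "forecasting"]
bonus = {tag: 25 for tag in tags}        # +25 karma per boosted tag
multiplier = {tag: 3.0 for tag in tags}  # 3x per boosted tag

print(additive_score(-1, tags, bonus))           # 124   -> surfaced near the top
print(multiplicative_score(-1, tags, multiplier))  # -243.0 -> buried even deeper
```

With additive bonuses the low-karma post is surfaced for exactly the reader who boosted those tags; with multipliers the negative score is amplified and the post is buried even deeper.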
What? How? I’ve found like 3 different “customize” options, and none of them are this.
Side note: I’ve noticed that web & app developers these days try to make settings “intuitive” instead of just putting them all in one place, which I think is silly. Just put all settings under settings. Why on Earth are there multiple “customize” options?
Nevermind. Another comment explained it. I would greatly appreciate that option also being put under settings! I would have found it much easier.
Putting it under settings does sound reasonable.
I only visit the site every month or so and I use All Posts grouped by Weekly to “catch up”. It looks like that particular page does not have support for this kind of tag-specific penalty. :/
What concretely do you have in mind here?
I would say “tragically flawed”: noble in their aspirations and very well written, but overconfident in some key foundations. The sequences make some strong assumptions about how the brain works and thus the likely nature of AI, assumptions that have not aged well in the era of DL. Fortunately the sequences also instill the value of updating on new evidence.
Back when the sequences were written in 2007/2008 you could roughly partition the field of AI based on beliefs around the efficiency and tractability of the brain. Everyone in AI looked at the brain as the obvious single example of intelligence, but in very different lights.
If brain algorithms are inefficient and intractable[1] then neuroscience has little to offer, and instead more formal math/CS approaches are preferred. One could call this the rationalist approach to AI, or perhaps the “and everything else” approach. One way to end up in that attractor is by reading a bunch of ev psych; EY in 2007 was clearly heavily into Tooby and Cosmides, even if he had some quibbles with them on the source of cognitive biases.
From Evolutionary Psychology and the Emotions:
An evolutionary perspective leads one to view the mind as a crowded zoo of evolved, domain-specific programs. Each is functionally specialized for solving a different adaptive problem that arose during hominid evolutionary history, such as face recognition, foraging, mate choice, heart rate regulation, sleep management, or predator vigilance, and each is activated by a different set of cues from the environment.
From the Psychological Foundations of Culture:
Evolution, the constructor of living organisms, has no privileged tendency to build into designs principles of operation that are simple and general. (Tooby and Cosmides 1992)
EY quotes this in LOGI, 2007 (p. 4), immediately followed by:
The field of Artificial Intelligence suffers from a heavy, lingering dose of genericity and black-box, blank-slate, tabula-rasa concepts seeping in from the Standard Social Sciences Model (SSSM) identified by Tooby and Cosmides (1992). The general project of liberating AI from the clutches of the SSSM is more work than I wish to undertake in this paper, but one problem that must be dealt with immediately is physics envy. The development of physics over the last few centuries has been characterized by the discovery of unifying equations which neatly underlie many complex phenomena. Most of the past fifty years in AI might be described as the search for a similar unifying principle believed to underlie the complex phenomenon of intelligence.
Physics envy in AI is the search for a single, simple underlying process, with the expectation that this one discovery will lay bare all the secrets of intelligence.
Meanwhile in the field of neuroscience there was a growing body of evidence and momentum coalescing around exactly the “physics envy” approaches EY bemoans: the universal learning hypothesis, popularized to a wider audience in On Intelligence in 2004. It is pretty much pure tabula rasa, blank-slate, genericity and black-box.
The UL hypothesis is that the brain’s vast complexity is actually emergent, best explained by simple universal learning algorithms that automatically evolve all the complex domain-specific circuits required by the simple learning objectives and implied by the training data. (Years later I presented it on LW in 2015, and I finally got around to writing up the brain efficiency issue more recently—although I literally started the earlier version of that article back in 2012.)
But then the world did this fun experiment: the rationalist/non-connectivist AI folks got most of the attention and research money, but not all of it—and then various research groups did their thing and tried to best each other on various benchmarks. Eventually Nvidia released CUDA, a few connectivists ported ANN code to their gaming GPUs and started to break ImageNet, and then a little startup (founded by some folks who met in a neuroscience program, with the mission of reverse engineering the brain) adapted that code to play Atari and later break Go; the rest is history, as you probably know.
Turns out the connectivists and the UL hypothesis were pretty much completely right after all—proven not only by the success of DL in AI, but also by how DL is transforming neuroscience. We know now that the human brain learns complex tasks like vision and language not through kludgy complex evolved mechanisms, but through the exact same simple approximate Bayesian (self-supervised) learning algorithms that drive modern DL systems.
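As an aside, here is a toy sketch of what “simple universal learning algorithm” means in this framing (my own illustrative example with made-up data and helper names, not anything from the post or from real DL/neuroscience models): a single generic self-supervised objective, predicting a held-out part of the input from the rest, plus a single generic optimizer, reused unchanged across different kinds of data.

```python
# Illustrative toy only: one self-supervised objective (predict a masked-out part of
# the input from the rest) and one optimizer, applied unchanged to two different
# toy "modalities". Data and function names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def train_masked_predictor(data, steps=2000, lr=0.01):
    """Fit a linear model to predict each row's last feature from its other features."""
    x, y = data[:, :-1], data[:, -1]          # mask out the last feature as the target
    w = np.zeros(x.shape[1])
    for _ in range(steps):
        grad = x.T @ (x @ w - y) / len(x)     # gradient of the mean squared error
        w -= lr * grad
    return np.mean((x @ w - y) ** 2)          # final prediction error

# Two toy data streams: spatially correlated "image-like" rows and periodic "text-like" rows.
image_like = rng.normal(size=(500, 8)).cumsum(axis=1)
text_like = np.stack([np.sin(np.arange(8) + phase) for phase in rng.uniform(0, 6.28, 500)])

for name, data in [("image-like", image_like), ("text-like", text_like)]:
    print(f"{name}: held-out-feature MSE = {train_masked_predictor(data):.3f}")
```

Real brains and real DL systems are of course vastly more complicated than a linear probe, but the structural point is the one the UL hypothesis makes: the same objective and update rule, with the domain-specific structure coming from the data.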
The sequences and associated materials were designed to “raise the rationality waterline” and ultimately funnel promising new minds into AI safety. And there they succeeded, especially in those earlier years. Finding an AI safety researcher today who isn’t familiar with the sequences and LW … well, maybe they exist? But they would be unicorns. ML-safety and even brain-safety approaches are now obviously more popular, but there is still an enormous bias/inertia in AI safety stemming from the circa 2007 beliefs and knowledge crystallized and distilled into the sequences.
[1] It’s also possible to end up in the “brains are highly efficient, but completely intractable” camp, which implies uploading as the most likely path to AI—this is where Hanson is—and it is closer to my beliefs circa 2000, before I had studied much systems neuroscience.
I think in order to be “concrete” you need to actually point to a specific portion of the sequences that rests on these foundations you speak of, because as far as I can tell none of it does.
you need to actually point to a specific portion of the sequences that rests on these foundations you speak of,
I did.
My comment has 8 links. The first is a link to “Adaptation-Executers, not Fitness-Maximizers”, which is from the sequences (“The Simple Math of Evolution”), and it opens with a quote from Tooby and Cosmides. The second is a link to the comment section from another post in that sequence (Evolutions Are Stupid (But Work Anyway)) where EY explicitly discusses T&C, saying:
They’re certainly righter than the Standard Social Sciences Model they criticize, but swung the pendulum slightly too far in the new direction.
And a bit later:
In a sense, my paper “Levels of Organization in General Intelligence” can be seen as a reply to Tooby and Cosmides on this issue; though not, in retrospect, a complete one.
My third and fourth links are then the two T&C papers discussed, and the fifth link is a key quote from EY’s paper LOGI—his reply to T&C.
On OB in 2006, in “The martial art of rationality”, EY writes:
Such understanding as I have of rationality, I acquired in the course of wrestling with the challenge of artificial general intelligence (an endeavor which, to actually succeed, would require sufficient mastery of rationality to build a complete working rationalist out of toothpicks and rubber bands).
So in his own words, his understanding of rationality came from thinking about AGI—which makes the LOGI and related quotes relevant, as they reveal the true foundation of the sequences (his thoughts about AGI) around the time he wrote them.
These quotes—especially the LOGI quote—clearly establish that EY has an evolved-modularity-like view of the brain, with all that entails. He is skeptical of neural networks (even today he calls them “giant inscrutable matrices”) and especially of “physics envy” universal-learning-type explanations of the brain (Bayesian brain, free energy, etc.), and this subtly influences everything the sequences say about the brain or related topics. The overall viewpoint is that the brain is a kludgy mess riddled with cognitive biases. He not so subtly disses systems/complexity theory and neural networks.
More key to the AI risk case are posts such as “Value is Fragile”, which clearly builds on his larger worldview:
Value isn’t just complicated, it’s fragile. There is more than one dimension of human value, where if just that one thing is lost, the Future becomes null. … And then there are the long defenses of this proposition, which relies on 75% of my Overcoming Bias posts,
And of course “The Design Space of Minds in General”
These views espouse the complexity and fragility of human values and the supposed narrowness of human mind space vs the vastness of AI mind space, which are core components of the AI risk arguments.
None of which is a concrete reply to anything Eliezer said inside “Adaptation-Executers, not Fitness-Maximizers”, just a reply to what you extrapolate Eliezer’s opinion to be, because he read Tooby and Cosmides and claimed they were somewhere in the ballpark in a separate comment. So I ask again: what portion of the sequences do you have an actual problem with?
And your reply isn’t a concrete reply to any of my points.
The quotes from LOGI clearly establish exactly where EY agrees with T&C, and the other quotes establish the relevance of that to the sequences. It’s not as if two separate brains wrote LOGI and the sequences, and the other quotes establish the correspondence regardless.
This is not a court case where I’m critiquing some super specific thing EY said. Instead I’m tracing memetic influences: establishing which high-level abstract brain/AI viewpoint cluster he was roughly in when he wrote the sequences, and how that influenced them. The quotes are clear enough for that.
So it turns out I’m just too stupid for this high level critique. I’m only used to ones where you directly reference the content of the thing you’re asserting is tragically flawed. In order to get across to less sophisticated people like me in the future, my advice is to independently figure out how this “memetic influence” got into the sequences and then just directly refute whatever content it tainted. Otherwise us brainlets won’t be able to figure out which part of the sequences to label “not true” due to memetic influence, and won’t know if your disagreements are real or made up for contrarianism’s sake.
This comment contains a specific disagreement.
I think you’re reading way too much into the specific questionable wording of “tragically flawed”. By that I meant that they are flawed in some of their key background assumptions, in how that influences thinking on AI risk/alignment, and in the consequent system-wide effects. I didn’t mean they are flawed at their surface-level purpose—as rationalist self-help and community foundations. They are very well written and concentrate a large amount of modern wisdom. But that of course isn’t the full reason why EY wrote them: they are part of a training funnel to produce alignment researchers.
I think you may have flipped something here