There must be some testable part of everyday human cognition which relies on this general algorithm, right?
Well, yes, but they’re of a hard-to-verify “this is how human cognition feels like it works” format. E.g., I sometimes talk about how humans seem to be able to navigate unfamiliar environments without experience, in a way that seems to disagree with baseline shard-theory predictions. But I don’t think that’s persuaded people not already inclined to this view. The magical number 7±2 and the associated weirdness are also of the relevant genre.
Like, at the very least, what if we looked at fMRIs of human brains while they were engaging in all the tasks you laid out above, and looked at some similarity metric between the scans?
Hm, I guess something like this might work? Not sure about the precise operationalization, though.
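A minimal sketch of what one operationalization could look like, assuming each task’s scan has already been preprocessed into an activation map on a common voxel grid (the function and task names here are hypothetical placeholders, not anything from the actual discussion):

```python
# Sketch: pairwise similarity between per-task fMRI activation maps,
# representational-similarity-analysis style. Assumes each map is
# motion-corrected and registered to a shared voxel grid.
import numpy as np

def task_similarity_matrix(task_maps: dict[str, np.ndarray]):
    """Pairwise Pearson correlation between flattened activation maps."""
    names = sorted(task_maps)
    flat = np.stack([task_maps[name].ravel() for name in names])
    # np.corrcoef treats each row as one variable -- here, one task's map.
    return names, np.corrcoef(flat)

# Hypothetical usage with random stand-in data on a shared 64x64x40 grid:
rng = np.random.default_rng(0)
maps = {
    "maze_navigation": rng.normal(size=(64, 64, 40)),
    "novel_tool_use": rng.normal(size=(64, 64, 40)),
}
names, sim = task_similarity_matrix(maps)
print(names)
print(sim.round(2))
```

Whether raw map correlation, full RSA over condition-wise dissimilarity matrices, or some other metric is the right choice is exactly the operationalization question left open here.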
Would you be willing to do a dialogue about predictions here with @jacob_cannell, @Quintin Pope, @Nora Belrose, or others (also a question to those pinged)?
If any of the others are particularly enthusiastic about this and expect it to be high-value, sure!
That said, I personally don’t expect it to be particularly productive.
These sorts of long-standing disagreements haven’t historically been resolvable via debate (the failure of Hanson vs. Yudkowsky is kind of foundational to the field).
I think there’s great value in having a public discussion nonetheless, but that value lies in informing the readers’ models of what the different sides believe.
Thus, inasmuch as we’re having a public discussion, I think it should be optimized for thoroughly laying out one’s points to the audience.
However, dialogues-as-a-feature seem to be more valuable to the participants than to the audience, and are actually harder for readers to grok.
Thus, my preferred method for discussing this sort of stuff is to exchange top-level posts trying to refute each other (the way this post is, to a significant extent, a response to the “AI is easy to control” article), and then maybe argue a bit in the comments. But not to have a giant tedious top-level argument.
I’d actually been planning to make a post about the difficulties the “classical alignment views” have with making empirical predictions, and I guess I can prioritize it more?
But I’m overall pretty burned out on this sort of arguing. (And arguing about “what would count as empirical evidence for you?” generally feels like too-meta fake work, compared to just going out and trying to directly dredge up some evidence.)
Not entirely sure what @Thane Ruthenis’ position is, but this feels like a possibly relevant piece of information: https://www.science.org/content/article/formerly-blind-children-shed-light-centuries-old-puzzle
Not sure what the relevance is? I don’t believe that “we possess innate (and presumably God-given) concepts that are independent of the senses”, to be clear. “Children won’t be able to instantly understand how to parse a new sense and map its feedback to the sensory modalities they’ve previously been familiar with, but they’ll grok it really fast with just a few examples” was my instant prediction upon reading the titular question.
I’m also not sure of the relevance, and I haven’t followed the thread fully, but the summary of that experiment is that it takes some time (measured in nights of sleep, which are a rough equivalent of big batch training updates) for the newly sighted to develop vision, but less time than it takes infants, presumably because the newly sighted already have fully functioning sensory-inference world models in another modality that can speed up learning through dense top-down priors.
But it’s far more than “grok it really fast with just a few examples”: training their new visual systems still takes nontrivial training data and time.
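As a toy illustration of that “dense top-down priors” claim (a sketch under stated assumptions, not a model of the actual experiment; all shapes, names, and numbers are made up): compare fitting a “vision” readout from scratch against fitting it on top of a frozen encoder that already captures the latent structure, with each optimizer step loosely standing in for one big batch update, i.e. a “night of sleep.”

```python
# Toy comparison: learning a new modality from scratch vs. with a
# frozen pre-existing world model acting as a top-down prior.
import torch
import torch.nn as nn

torch.manual_seed(0)
latent = torch.randn(512, 8)                        # shared world state (e.g. touch-derived)
mix = torch.randn(8, 32)
images = latent @ mix + 0.1 * torch.randn(512, 32)  # new "visual" observations
targets = latent @ torch.randn(8, 4)                # downstream quantity to predict

def train(model: nn.Module, steps: int = 30) -> float:
    opt = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=0.05)
    for _ in range(steps):                          # one step ~ one "night of sleep"
        loss = nn.functional.mse_loss(model(images), targets)
        opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# From scratch: must discover the latent structure and the readout at once.
scratch = nn.Sequential(nn.Linear(32, 8), nn.Linear(8, 4))

# Primed: encoder frozen near the true inverse map, standing in for the
# cross-modal world model the newly sighted already possess.
encoder = nn.Linear(32, 8)
with torch.no_grad():
    encoder.weight.copy_(torch.linalg.pinv(mix).T)
    encoder.bias.zero_()
encoder.requires_grad_(False)
primed = nn.Sequential(encoder, nn.Linear(8, 4))

# For the same small step budget, the primed head typically ends at a
# lower loss -- faster, but still requiring nontrivial training.
print("from scratch:", train(scratch))
print("with prior:  ", train(primed))
```

The point of the toy is only directional: the prior shrinks the number of updates needed but doesn’t eliminate them, matching “less time than infants, but way more than a few examples.”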