> Humans must have gotten this ability from somewhere, and it’s unlikely the brain has tons of specialized architecture for it.
This is probably a crux; I think the brain does have tons of specialized architecture for it, and if I didn’t believe that, I probably wouldn’t think thought assessment was as difficult.
> The thought generator seems more impressive/fancy/magic-like to me.
Notably, people’s intuitions about what is impressive/difficult tend to be inversely correlated with reality. The stereotype is (or at least used to be) that AI will be good at rationality and reasoning but struggle with creativity, humor, and intuition. This stereotype contains information, since inverting it makes better-than-chance predictions about what AI has been good at so far, especially LLMs.
I think this is not a coincidence, but roughly because people use “degree of conscious access” as an inverse proxy for intuitive difficulty. The more unconscious something is, the more it feels like we don’t know how it works, and the more difficult it intuitively seems. But I suspect degree of conscious access positively correlates with actual difficulty.
> If sequential reasoning is mostly a single trick, things should get pretty fast now. We’ll see soon? :S
Yes; I think the “single trick” view might be mostly confirmed or falsified in as little as 2-3 years. (If I introspect, I’m pretty confident that I’m not wrong here; the scenario that frightens me is more that sequential reasoning improves non-exponentially but quickly, which I think could still mean doom even if it takes 15 years. Those feel like short timelines to me.)
> This is probably a crux; I think the brain does have tons of specialized architecture for it, and if I didn’t believe that, I probably wouldn’t think thought assessment was as difficult.
I think this is also a crux.
IMO the brain is mostly cortically uniform, à la Steven Byrnes, and in particular I think the specialized architecture for thought assessment is pretty minimal.
The big driver of human success is basically something like the bitter lesson applied to biological brains, combined with humans being very well optimized for tool use, such that over time they can develop technology that is used to dominate the world. (It’s also helpful that humans can cooperate reasonably well in groups below 100 people, which is more than almost all other social species manage, though I’ve become much more convinced that cultural learning is way less powerful than Henrich et al. have said.)
(There are papers showing that humans scale neuron count better than basically every other species, but I can’t find them right now.)