And it is important to notice that o1 is an attempt to use tons of inference as a tool, to work around its G (and other) limitations, rather than an increase in G or knowledge.
This is a rather strange statement.
o1 is basically a “System 2” addition (in the sense of “Thinking, Fast and Slow”) on top of a very strong GPT-4o “System 1”. As far as “System 1” entities go, GPT-4-level systems seem to me considerably superior to the “System 1” fast-thinking components of a human being[1].
The “System 2” part seems to be a significant component of a human’s G, and o1 does seem to represent a “System 2” addition on top of a GPT-4-level “System 1”. So it seems appropriate to attribute an increase of G to this addition, given that it does increase the system’s general problem-solving capabilities.
Basically, “System 2” thinking still seems to be a general capability to reason and deliberate, not a particular skill or tool.
[1] If we exclude human “System 2” “slow thinking” capabilities for the purpose of this comparison.