I agree that it’s plausible chess plans can be compressed without invoking full reasoners (and with the more general point that there are degrees of compression you can do short of a full-on ‘reasoner’, and with the more specific point that I was oversimplifying in my comment). My intent with my comment was to highlight how “but my AI only generates plans” is sorta orthogonal to the alignment question, which is pushed, in the oracle framework, over to “how did that plan get compressed, what sort of cognition is involved in the plan, and why does running that cognition yield good outcomes”.
I have not yet found a pivotal act that seems to me to require only shallow real-time/reactive cognition, but I endorse the exercise of searching for highly specific and implausibly concrete pivotal acts with that property.