Thanks Dagon, I appreciate the concrete feedback.
I’m trying to express that people typically think about systems with very high cognitive power (relative to humans), but it could be interesting and useful to consider what very low cognitive power systems are like. Looking at such extreme cases can inform things like the type signature of intelligence, agency, etc. The post is me trying to think this through and noticing that low cognitive power systems are hard to characterize: give a superintelligence the right goal, for example, and it can behave like a low cognitive power system.