Understood, and agreed, but I’m still left wondering about my question as it pertains to the first sigmoidal curve that shows STEM-capable AGI. Not trying to be nitpicky, just wondering how we should reason about the likelihood that the plateau of that first curve is not already far above the current limit of human capability.
One reason to think it may not be is irreducible complexity: the same problems might be very hard for us at around the same level that they'd be hard for a (first-gen) AGI. A reason to think the opposite is that we already have line of sight to a bunch of amazing tech; it's mostly a question of allocating the resources to support enough smart people working out the details.
Another reason to think the opposite is that a system that's (in some sense) directly optimized to be intelligent might just have a plateau drawn from a higher-meaned distribution than a system that's optimized for fitness and develops intelligence only as a useful tool along the way, since the selection pressure on intelligence in the latter case caps out at whatever it takes to dominate your immediate environment. A toy sketch of that framing is below.
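To make the "higher-meaned distribution" framing concrete, here's a minimal toy simulation. It assumes, purely for illustration, that each optimization process draws its capability plateau from a lognormal distribution; the human level of 1.0, both means, and the spread are made-up numbers, not estimates of anything real.

```python
# Toy sketch of the argument above, not a claim about real values: imagine
# each optimization process draws its capability plateau from a distribution,
# and ask how often a draw lands above the human level.
import random

random.seed(0)

HUMAN_LEVEL = 1.0  # hypothetical: normalize human capability to 1.0

def sample_plateaus(log_mean: float, log_spread: float, n: int = 100_000) -> list[float]:
    """Draw n plateau heights from a lognormal with the given log-space mean/spread."""
    return [random.lognormvariate(log_mean, log_spread) for _ in range(n)]

def frac_above(draws: list[float], level: float) -> float:
    """Fraction of draws whose plateau exceeds the given level."""
    return sum(d > level for d in draws) / len(draws)

# Fitness-optimized intelligence: pressure caps out near "dominate your
# immediate environment", so center the distribution at the human level.
fitness_draws = sample_plateaus(log_mean=0.0, log_spread=0.5)

# Directly-optimized intelligence: same spread, but a higher-meaned
# distribution, per the argument above (the +0.5 shift is arbitrary).
direct_draws = sample_plateaus(log_mean=0.5, log_spread=0.5)

print(f"P(plateau > human), fitness-optimized:  {frac_above(fitness_draws, HUMAN_LEVEL):.2f}")
print(f"P(plateau > human), directly optimized: {frac_above(direct_draws, HUMAN_LEVEL):.2f}")
```

With these illustrative numbers the fitness-optimized case lands above human level about half the time (by construction, since its median sits at 1.0), while the directly-optimized case does so roughly 84% of the time; the point is only that a modest shift in the mean moves a lot of probability mass past the human line, not that these particular parameters are right.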