Seeing “most of it doesn’t seem to me to be even trying by their own lights to engage with what look to me like the lethal problems” makes it seem to me that you are confused. The correct lesson isn’t that they’re unwilling to deal with the serious parts; it’s that their evidence is not trivially compatible with your conclusions. In other words, the balance of evidence they’ve seen indicates either that things aren’t so serious or that there is plenty of time. Reading the link, it seems you simply don’t like their solutions because you think the problem is much harder than they do. That should mostly lower your confidence that the little evidence we have about AI means what you think it does, rather than leading you to put all the weight on their not doing the important work. You seem very overconfident.
I personally disagree with you both on the difficulty of alignment and on how rapidly AI will become important; most of the evidence suggests the latter will be slow. Without a singularity beforehand, simply increasing the resources spent is quickly becoming infeasible even for the largest companies. The rate at which computing power gets cheaper is itself slowing down considerably, despite exponentially growing resources being spent to keep that scaling going. Current narrow AIs need to get exponentially larger for each noticeable gain in ability, so we are essentially relying on algorithmic advances to get high-end AI soon, while at the same time needing to make it more general. Stochastic gradient descent is an extremely well understood algorithm, and it is unlikely there is much to be gained from simply implementing it better, so to make quick progress we would more or less need to replace transformers, or possibly neural nets themselves. Beyond that, we need a great deal more generality to build an AGI. Even once we do (assuming we do), all of these impediments would still have to go away for there to be a quick takeoff. None of this is proof it won’t happen, but it should vastly lower your confidence.
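To make the cost point concrete, here is a toy back-of-the-envelope sketch. Every number in it is a placeholder I made up for illustration (the per-gain size multiplier, the cost per parameter, the current model size); the only point is that if each noticeable gain requires multiplying size and spend by a constant factor, the budget runs out after very few more gains.

```python
# Toy back-of-the-envelope: if every fixed "noticeable gain" in capability
# requires multiplying parameter count (and roughly training cost) by a
# constant factor, costs explode after only a few more gains.
# All numbers below are hypothetical and chosen purely for illustration.

size_multiplier_per_gain = 10    # assumed: 10x parameters per noticeable gain
cost_per_param_dollars = 1e-4    # assumed: rough training cost per parameter
current_params = 1e11            # assumed: a ~100B-parameter current model

for gains in range(1, 6):
    params = current_params * size_multiplier_per_gain ** gains
    cost = params * cost_per_param_dollars
    print(f"{gains} more gains: ~{params:.0e} params, ~${cost:,.0f} to train")
```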
You need to add a very large term for ‘super-AGI will not happen in the near term for obvious reasons’, and another very large term for ‘super-AGI doesn’t happen for reasons I am unaware of’ (plus a term for it never happening at all), to your currently small term for ‘we’ll figure out how to avoid misalignment’.
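As a sketch of what I mean, here is the crude decomposition with placeholder numbers (they are not estimates I am defending, and I treat the two ‘no super-AGI’ terms as disjoint purely for simplicity):

```python
# Crude decomposition of P(near-term doom from misaligned super-AGI).
# All probabilities are placeholders to show how the extra terms interact;
# the two 'no super-AGI' terms are treated as disjoint for simplicity.

p_no_agi_obvious_reasons = 0.5   # 'super-AGI won't happen soon, for obvious reasons'
p_no_agi_unknown_reasons = 0.3   # 'super-AGI doesn't happen, for reasons I'm unaware of'
p_alignment_solved = 0.1         # 'we'll figure out how to avoid misalignment'

p_agi_soon = 1 - (p_no_agi_obvious_reasons + p_no_agi_unknown_reasons)
p_doom = p_agi_soon * (1 - p_alignment_solved)

print(f"P(super-AGI soon) = {p_agi_soon:.2f}")   # 0.20
print(f"P(near-term doom) = {p_doom:.2f}")       # 0.18
```

Even with a small chance of solving misalignment, large terms for super-AGI not arriving drag the combined estimate well down from near-certainty.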