I wonder what all these smart AI people would be investing effort in improving if we weren’t in the AI timeline that we are right now.
When AI felt “around the corner in 5-10 years” like it did up through 2019 or so, I experienced LessWrong as collectively thinking and talking about things through more lenses than just the AI perspective.
It feels like the site’s collective culture is doubling down on a panicked obsession with AI. While this is probably the best and most rational reaction to the circumstances we’re in, I don’t expect that my joining in the panic would improve outcomes, and it seems like someone of my abilities joining the panicked scramble to find a solution would probably make outcomes worse.
In the perpetual “5-10 years out” phase, the people who are now consumed with total focus on immediate AI stuff were doing all kinds of cool stuff with cool side effects, in a way that I’m seeing a lot less of at the moment.
Again, that’s probably good, and probably right, and probably the best possible response to the situation.
But at the same time, I can’t help wondering what we’re missing out on.
It’s like coming upon the scene of an emergency where someone has a very obviously, horribly broken limb. If you double down on figuring out what to do about just that limb because it is SUPER OBVIOUSLY SUPER IMMEDIATELY SUPER BROKEN, you might miss that their neck isn’t in a position that lets them breathe well, or that a wound on their back is letting all their blood out in a way that would stop if you noticed it and applied pressure.
It might be because I’m not the right combination of intelligent and knowledgeable to tell the difference, but from where I’m watching all this, it’s hard to feel as confident as everyone else seems to be that AGI risk is actually the life threat and not just the ostentatiously gory-looking but ultimately non-lethal distracting injury.
If AI risk is a distracting injury, I do not know what the real life threats are. But having all the x-risk people best qualified to identify life threats to our species clustered so single-mindedly on AI at the moment feels like looking up and noticing that every guard in your building, who normally stands one per door, is off in a corner dealing with a single threat, leaving all the other doors unattended. It might not actually matter whether the other doors are always guarded, but there’s a certain implication that it’s worth doing, which comes from always seeing a guard there.