I’m confused. How do you explain the fact that we don’t currently have human-level AI—e.g., AI that is as good as Ed Witten at publishing original string theory papers, or an AI that can earn a billion dollars with no human intervention whatsoever? Or do you think we do have such AIs already? (If we don’t already have such AIs, then what do you mean by “the horse is no longer in the stable”?)
I think coordination problems are the only thing preventing that AI from existing. And coordination problems are solvable.
If I had to guess (which I don’t have to, but I like to make guesses), I would say that the NSA is likely to be the first entity to achieve an aligned superintelligence. And I don’t see how they can be more than a day behind OpenAI, given all their zero-days. So, you know. That horse is on its way out of the stable, and the people who want the horse to go somewhere particular have a lot of guns.
I don’t understand what you’re trying to say. What exactly are the “coordination problems” that prevented true human-level AI from being created as early as last year?
People squabbling about who gets to own it, as exemplified by the OpenAI board chaos.
People afraid of what it might do once it exists, as exemplified by the alignment community’s paranoia and infohoarding.
If everyone just released all the knowledge available today (if we were all infopandoras rather than infohoarders), I predict we would have human-level AI within a matter of months. Do we want that? It is very unclear what people want, and they all disagree with one another. Again, coordination problems.