And to answer the obvious subtext: what if AI is a dangerous technology? That is the one thing agreed upon by all members of the alignment community, since that’s what it means to be a member of the alignment community.
What’s the rate-limiting step for AI? Data, compute, algorithms: all those barn doors have been open for all of human history, and the horse is no longer in the stable as far as I can tell.
The only rate-limiting step left is public trust that AI has any idea what it’s talking about. So that’s the resource we need the government to limit access to. This is what I plan to do with Peacecraft.ai. It’s also the motivation behind my post on the Snuggle/Date/Slap protocol, another one that LessWrong hated.
I’m confused. How do you explain the fact that we don’t currently have human-level AI—e.g., AI that is as good as Ed Witten at publishing original string theory papers, or an AI that can earn a billion dollars with no human intervention whatsoever? Or do you think we do have such AIs already? (If we don’t already have such AIs, then what do you mean by “the horse is no longer in the stable”?)
I think coordination problems are the only thing preventing that AI from existing. And coordination problems are solvable.
If I had to guess (which I don’t have to, but I like to make guesses), I would say that the NSA is likely to be the first entity to achieve an aligned superintelligence. And I don’t see how they can be more than a day behind OpenAI with all their zero-days. So, you know. That horse is on its way out of the stable, and the people who want the horse to go somewhere particular have a lot of guns.
I don’t understand what you’re trying to say. What exactly are the “coordination problems” that prevent true human-level AI from having already been created last year?
People squabbling about who gets to own it, as exemplified by the OpenAI board chaos.
People afraid of what it might do once it exists, as exemplified by the alignment community’s paranoia and infohoarding.
If everyone just released all the knowledge available today (if we were all infopandoras), I predict we would have human-level AI within a matter of months. Do we want that? It is very unclear what people want, and they all disagree. Again, coordination problems.