Google is the prime example of a tech company that values ethics, or at least it was in the recent past. I have much less faith that Amazon, Microsoft, Facebook, the US federal government, or the Chinese government would even make gestures toward responsibility in AI.
I work for Microsoft, though not in AI/ML. My impression is that we do care deeply about using AI responsibly, but not necessarily about the kinds of alignment issues that people on LessWrong are most interested in.
Microsoft’s leadership seems to be mostly concerned that AI will be biased in various ways, or will make mistakes when it’s deployed in the real world. There are also privacy concerns around how data is being collected (though I suspect that’s also an opportunistic way to attack Google and Facebook, since they get most of the revenue for personalized ads).
The LessWrong community seems to be more concerned that AI will be too good at achieving its objectives, and we’ll realize when it’s too late that those aren’t the actual objectives we want (e.g., Paperclip Maximizer).
To me those seem like mostly opposite concerns. That’s why I’m actually somewhat skeptical of your hope that ethical AI teams will push for a solution to the alignment problem. The work might overlap in some ways, but I think the main goals are different.
Does that make sense?
I think this makes sense, but I disagree with it as a factual assessment.
In particular I think “will make mistakes” is actually an example of some combination of inner and outer alignment problems that are exactly the focus of LW-style alignment.
I also tend to think that the failure to make this connection is perhaps the biggest single problem in both the ethical AI and AI alignment spaces, and I continue to be confused about why no one else seems to take this perspective.
Necroing.
“This perspective” being that we should smuggle LW-style alignment into corporations by expanding the fear of the AI “making mistakes” to include our fears?