You can tell pretty easily how good research in math or physics is. But in AI safety research, you can fund people working on the wrong things for years and never know, which is exactly the problem MIRI is currently crippled by. I think OpenAI plans to get around this problem by avoiding AI safety research altogether and just building AIs instead. That initial approach seems like the best option. Even if they contribute nothing to AI safety in the near-term, they can produce enough solid, measurable results to keep the organization alive and attract the best researchers, which is half the battle.
What troubles me is that OpenAI could set a precedent for AI safety as a political issue, like global warming. You just have to read the comments on the HN article to see that people don't think they need any expertise in AI safety to hold strong opinions about it. In particular, if Sam Altman and Elon Musk hold some false belief about AI safety, who is going to prove them wrong? You can't just run an experiment the way you can in physics. That may explain why they have gotten this far without being able to give well-thought-out answers to some important questions. What MIRI got right is that AI safety is a research problem, so only the opinions of the experts matter. While OpenAI is still working on ML/AI and producing measurable results, it might work to have the people who happened to be wealthy and influential in charge. But if they hope to contribute to AI safety, they will have to hand over control to the people with the correct opinions, and they can't tell who those people are.
Which of the answers do you consider not well-thought-out?