In my opinion, the best of the proposed solutions to the AI safety problem is to build AI number 1, tell it that we are going to create another AI (number 2), and ask AI number 1 how to ensure the friendliness and safety of AI number 2, and how to ensure that an unsafe AI is never created. This solution has its chances of failing, but in my opinion it is still much better than any other proposed solution. What do you think?
If AI 1 cannot be trusted, any AI it tells us how to build cannot be trusted.