But how will they detect the deceptive behavior itself? Will they be on the lookout for deceptive behavior and use clever techniques to detect it?
Either they’ll be on the lookout, or they’ll have some (correct) reason to expect that deception won’t happen.
What made Facebook transition into a company that takes AI safety seriously?
AI capabilities have progressed to the point where researchers think it is plausible that AI systems could actually “intentionally” deceive you. We also have better arguments for risk, and those arguments are being made by more prestigious people.
(I thought I was more optimistic than average on this point, but here it seems like most commenters are more optimistic than I am.)
If you think that Facebook is unlikely to do a 100x scale-up in one go, suppose that their leadership comes to believe that the scale-up would cause their revenue to increase in expectation by 10%.
You’d have to be really confident in this to skip doing a 10x scale-up first (which would be roughly 10x less costly) just to see what happens there; I’d be surprised if you could find examples of big companies doing this. OpenAI is perhaps the company that most bets on its beliefs, and it still scaled to 13B parameters before the 175B parameters of GPT-3, even though it had already published a paper specifically predicting how large language models would scale with more parameters.
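To make the scaling-law point concrete, here's a minimal sketch (my addition, not part of the original comment) of the kind of power-law extrapolation that paper describes: fit loss as a function of parameter count on smaller runs, then predict what a much larger run would buy before paying for it. The constant and exponent below are only rough, illustrative values, not exact fits.

```python
# Illustrative power-law loss curve of the form L(N) = (N_c / N) ** alpha,
# where N is the parameter count. The constant N_c and exponent alpha here
# are rough, illustrative values, not authoritative fits.

def predicted_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Predicted test loss as a function of parameter count."""
    return (n_c / n_params) ** alpha

if __name__ == "__main__":
    # Compare a 13B "check the trend" run with a 175B GPT-3-scale run
    # before committing to the far more expensive training job.
    for n in (13e9, 175e9):
        print(f"{n / 1e9:.0f}B params -> predicted loss ~ {predicted_loss(n):.3f}")
```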
Also, I’d be surprised if a 100x scale-up were the difference between “subhuman / can’t be deceptive” and “can cause an existential catastrophe”.