I do think Meta AI is a bad actor in the sense that it devotes too little attention and too few resources to existential risks from AI.
The following are my own opinions. I understand yours are just as valid and just as well thought-out and researched.
I find that the credible experts who know the most about AI are the least worried about Hollywood-style existential threats. I am personally more worried about ID theft and Russian cyber hackers making use of AI—and of any other computer system attached to wealth and identity—than I am about AI itself. Why worry so much about AI systems that have no proven motive to harm people when we already have so much proof of other human beings demonstrating exactly that? We have a long way to go yet before AGI (whatever that actually means), and we will get much value out of the research along the way.
Regarding the worries about deception: I think we’ve seen the risk of deception grow more from the development of ChatGPT and deep-fake models that can convincingly fabricate information and scenes than from Meta’s Cicero. The reason is the same as above: human bad actors can use those tools to their advantage at other people’s expense. Cicero was designed to play a game in which deception is a byproduct of gameplay; taken out of that context, it has no deception capabilities.