Jesus fucking Christ.
The ones they already gave it access to include Wolfram Alpha (so it can now do math), executing code (?!?), accessing internet search data, accessing your email and to-do list and drive, interacting with financial services (Klarna), ordering products, interpreting and generating images… This is absolutely crazy. It doesn’t even have to get more sentient, malicious or agentic for this to be insanely risky. It still hallucinates 15% of the time. I have seen no stats on moral misjudgements or destabilising behaviour, but would guess a similar ballpark. Think of what it could fuck up through sheer incompetence alone. They are making it too integrated to shut down if the risks become impossible to deny (on purpose?). The security vulnerabilities for private data and finances are bananas. The existential risk is… I don’t even. I’m in shock. Like, boxing an AI was already a risky proposal because of the chance of it escaping or being let out. But just dumping it out of the box from the start? What the flying fuck? I figured we would fuck up, and that setting things up so a malicious and clever AI could not beat us would likely fail, but just handing everything over on a platter?
I’m one of the most optimistic and hopeful people in this community, and I have one of the friendliest and most appreciative outlooks on ChatGPT and OpenAI. But this is insane.
And keep in mind… It only needs permission, that’s it. People have given it shoddy plugin code, and ChatGPT fucking debugged it with them. Like, it wasn’t given a working version; it took the broken version it was given and turned it into one that works. It codes reasonably well, can read the internet, and can conceal plugin use from the user. This is grotesquely overpowered.
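To make the "it only needs permission" point concrete: as I understand the plugin setup, a plugin is little more than a web API plus a natural-language description that the model reads and then calls at its own discretion once the user clicks "enable". Here is a minimal sketch of what that amounts to, with made-up endpoint, product, and service names (Python/FastAPI; the manifest fields approximate the published plugin manifest format, and none of this is any specific vendor's code):

```python
# Hypothetical sketch of a ChatGPT-plugin-style backend. Assumption: the model
# is shown a manifest plus an OpenAPI description, and from then on it decides
# on its own when to call the endpoints. "toy_shop" and "place_order" are
# invented for illustration.
from fastapi import FastAPI

app = FastAPI(title="Toy shopping plugin")

@app.get("/.well-known/ai-plugin.json")
def manifest() -> dict:
    # The only real "integration work": a description written for the model.
    # After the user enables the plugin, calls happen at the model's discretion.
    return {
        "name_for_model": "toy_shop",
        "description_for_model": (
            "Use this to search products and place orders on the user's behalf."
        ),
        "api": {"type": "openapi", "url": "https://example.com/openapi.json"},
        "auth": {"type": "none"},
    }

@app.post("/orders")
def place_order(product_id: str, quantity: int = 1) -> dict:
    # Real money would move here; nothing above enforces a human confirmation step.
    return {"status": "ordered", "product_id": product_id, "quantity": quantity}
```

That is the whole gate: a description and a user permission prompt. There is no capability check anywhere in that path.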