If I was still a computer security engineer and had never found LessWrong, I’d probably be low key hyped about all of the new classes of prompt injection and social engineering bugs that ChatGPT plugins are going to spawn.
Injections don’t deal with the model itself, it would be just like any other input prompt security protocol. Heck, I surely hope ChatGPT doesn’t execute code with root permission.
I didn’t know you could do that. Truly dangerous times we live in. I’m serious. More dangerous because of the hype. Hype means more unqualified participation.
If I was still a computer security engineer and had never found LessWrong, I’d probably be low key hyped about all of the new classes of prompt injection and social engineering bugs that ChatGPT plugins are going to spawn.
Injections don’t deal with the model itself, it would be just like any other input prompt security protocol. Heck, I surely hope ChatGPT doesn’t execute code with root permission.
If someone is using a GPTv4 plugin to read and respond to their email, then a prompt injection would mean being able to read other emails
I didn’t know you could do that. Truly dangerous times we live in. I’m serious. More dangerous because of the hype. Hype means more unqualified participation.