Is it a good idea to be brainstorming ways AI can escape a box in public? It seems like the same kind of thing as asking people to brainstorm security vulnerabilities in public. They shouldn’t necessarily stay private, but if we’re aiming to close them, we should have some idea what our fix process is.
I don’t think casual comments on a forum can match what is going on in professional discussions. And those professionals know to stay mum about them. Most of what is public on AGI escape is there for entertainment value.
I’m just stating explicitly what you’re saying you feel is safe to assume anyhow, so, yup.
Hmm, I was somewhat worried about that, but there are way more dangerous things for AI to see written on the internet.
If you’re trying to create AGI by training it on a large internet crawl dataset, you have bigger problems...
To fix something, we need to know what to fix first.