Yeah, in cases where the human is very clearly trying to ‘trick’ the AI into saying something problematic, I don’t see why people would be particularly upset with the AI or its creators. (It’d be a bit like writing some hate speech into Word, taking a screenshot and then using that to gin up outrage at Microsoft.)
If its instructions for doing dangerous or illegal things were any better than what you could easily find with a Google search, that would be another matter; but at first glance they all seem the same or worse.
Edit: Likewise, if it were writing superhumanly persuasive political rhetoric, then that would be a serious issue. But that too seems like something to worry about with respect to future iterations, not this one. So I wouldn’t assume that OpenAI’s decision to release ChatGPT implies they believed they had it securely locked down.