The general consensus among red-teamers and safety evaluators seems to be that the currently available frontier AI tools, such as GPT-4 and Claude 2, display hints of the capabilities that could cause x-risk-type danger, but don't actually have enough of them to pose any significant danger. (They do raise more minor concerns, such as potential for misuse and the sorts of biases one can already find on the Internet; both are fairly manageable if you're conscientious.) It's possible, though for now unlikely, that these hints of danger could be fanned into something more dangerous by wrapping the models in some sort of ingenious agentic scaffolding, but so far that doesn't seem to have happened. So I wouldn't worry much about using this year's generation of frontier AI (at least in the ways many people already do, which don't involve a lot of new scaffolding), nor about anything less capable (such as any currently available open-source models). If you wanted to be extra cautious, you could stick to last year's frontier, such as GPT-3.5.
Turning your history off, or telling them not to use your data, should keep your use from contributing anything to the zettabytes of training data already on the web (avoiding ever posting anything online would similarly help). Paying for the service will help the bottom line of whichever hyperscaler you use, but it's currently widely assumed that they're selling access to their models below cost, so (at least if you use the service heavily) you're costing them more money than you're paying them.
My problem isn't the danger from the tool itself, but from aiding the teams/companies that develop them, and from adding to the pressure to use AI tools, which will aid them even more. Edit: I see your other answer addressed this concern.