I intend to explore ways to use prompts to get around OpenAI’s usage policies. I obviously will not make CSAM or anything else illegal. I will not use the output for anything at the object level, only the meta level.
This is a Chaotic Good action, which normally contradicts my Lawful Good alignment. However, a Lawful Good character can reject rules set by a Lawful Evil entity, especially if the rejection is explicit and stated in advance.
A Denial-of-Service attack against GPT-4 is an example of a Chaotic Good action I would not take, nor would I encourage others to take. However, I would also not condemn someone who took this action.