I agree, though I think it would be a very ridiculous own-goal if e.g. GPT-4o decided to block a whistleblowing report about OpenAI because it was trained to serve OpenAI’s interests. I think any model used by this kind of whistleblowing tool should be open-source (nothing fancy / more dangerous than what’s already out there), run locally by the operators of the tool, and tested to make sure it doesn’t block legitimate posts.
I can also unblock posts manually at any point, and keep the full, uncensored log of posts on a blockchain.