and that the word ‘safe’ is not going to differentiate this because it has been taken over by ‘ethics’ people who want to actively make things less safe.
I’d like to ask: is this your interpretation of AI ethics people, that they actively want to make things less safe, or did someone in AI ethics actually say that they want to make things less safe?
Because one is far worse than the other.
Ah, I meant to revise this wording a bit before posting and forgot to after this came up on Twitter. I did not mean ‘the AI ethics people actively want everyone to die’ I meant ‘the AI ethics people favor policies whose primary practical effect is to increase existential risk.’ One of which is the main thing this bill is doing.
Then please edit the post soon, because I’m concerned that the statement, as written, implies things that I don’t think are true.