There’s a simpler way to pose the problem that I think raises the same issues: “What happens if somebody chooses to build an unfriendly AI programmed to benefit its creator at the expense of the rest of the world?”
Nope, not like that at all. What he’s talking about is knowledge that’s objectively harmful for someone to have.
Someone should make a list of knowledge that is objectively harmful. It could come in handy if you want to avoid running into it accidentally. Or we could just ban the medium used to spread it — in this case, natural language.