I’ve looked at it.
That is my impression too, which is why I don’t understand why you are complaining about censorship of ideas and wondering why EY doesn’t spend more time refuting them.
As I understand it, we are talking about actions that might be undertaken by an AI that you and I would call insane. The “censorship” is intended to mitigate the harm that might be done by such an AI. Since I think it possible that a future AI (particularly one built by certain people) might actually be insane, I have no problem with preemptive mitigation activities, even if the risk seems minuscule.
In other words, why make such a big deal out of it?
Having people delete your comments often rubs people up the wrong way, I find.
Hmm, I haven’t. It was meant to explain where that sentence came from in my copy-and-paste comment above. The gist of that comment concerned the foundational evidence supporting the premise that there are risks from an AI going FOOM.