Cruz: I think a model where being a terribly good liar (whether coached, innate, or self-taught) is a prerequisite for becoming a big cheese in US politics fits observations well.
Trumpeteer numbers: I’d now remove that part from my comment. You’re right. Superficially, my claim could seem substantiated by things like (Atlantic) “For many of Trump’s voters, the belief that the election was stolen is not a fully formed thought. It’s more of an attitude, or a tribal pose.”, but even there, upon closer reading, it comes out that in some form they do (or at least did) seem to believe it. Pardon my shallow remark before checking the facts more carefully.
AI safety: I guess what could make the post easier to understand, then, is if you made it clearer (i) whether you believe AI safety is not actually a major issue (vs. merely overemphasized and exploited by the big players to gain regulatory advantages), and, if you do dismiss most AI safety concerns, (ii) whether you do so mainly for today’s AI or also for what we expect in the future.
Ng: In no way do I doubt his merits as an AI pioneer! But that does not guarantee he has the right assessment of the technology’s future dangers at all. Incidentally, I also found some of his dismissals rather flippant; I remember this one. On your link Google Brain founder says big tech is lying about AI extinction danger: that article quotes Ng calling it a “bad idea that AI could make us go extinct”, but it does not provide any argument supporting that claim. Again, I do not contest that AI leaders are overemphasizing their concerns and exploiting them for regulatory capture; the incentives are obviously huge. Ng might even be right that “with the direction regulation is headed in a lot of countries, I think we’d be better off with no regulation than what we’re getting”, and that’s a hugely relevant question. I just don’t think it’s a reason to dismiss AI safety concerns more generally (and imho your otherwise valid post loses power by pushing in that direction, e.g. with the Ng example).
I believe that AI safety is a real issue. There are both near-term and long-term issues.
I believe that the version of AI safety that will get traction is regulatory capture.
I believe that the AI safety community is too focused on what fascinating technology can do, and not enough on the human part of the equation.
On Andrew Ng, his point is that he doesn’t see how exactly AI is realistically going to kill all of us. Without a concrete argument worth responding to, what can he really say? I disagree with him on this; I do think there are realistic scenarios to worry about. But I do agree with him on what is happening politically with AI safety.
Thanks, yes, sadly seems all very plausible to me too.