You make a good point that Cruz et al may have different beliefs than they portray publicly. But if so, then Cruz must have had a good acting coach in late 2018.
On the 70 million followers: you’re right to call me out for overstating it. But according to polling, that’s how many people believe that the 2020 election was stolen at the ballot box. So far he has lost dozens of election challenge cases, key members of his inner circle have admitted in court that there was no evidence, and he’s facing multiple sets of felony charges in multiple jurisdictions.
I think it is reasonable to call someone a devoted follower if they continue to accept his version in the face of such evidence.
On AI safety, we can mean two different things.
My concern is with the things that are likely to actually happen. Hence my focus on what is supported by tech companies, and what politicians are likely to listen to. That part I’m sure is mostly regulatory capture.
I did acknowledge, though not loudly enough, that there are people working in AI safety who truly believe in what they are doing. But to the extent that they don’t align with the vested interests, what they do will not matter. To the extent that they do align, their motivations don’t matter as much as the motivations of the vested interests. And in the meantime, I wish that they would investigate questions that I consider important.
For example, how easily can an LLM hypnotize people? Given the ability to put up images and play AI-generated videos, can it hypnotize people then? Can it implant posthypnotic suggestions? In other words, how easily could an existing social network, with existing technology, be used for mass hypnosis?
Update. I forgot to mention that Andrew Ng’s accomplishments in AI are quite impressive. Cofounder of Google Brain, taught machine learning to Sam Altman, and so on. I might wind up disagreeing with some of his positions, but I’ll generally default to trusting his thinking over mine on anything related to machine learning.
If you’re willing to pay, you can read a stronger version of his thoughts in Google Brain founder says big tech is lying about AI extinction danger.
Thanks!
Cruz: I think a model where being a terribly good liar (whether coached, innate, or self-taught) is a prerequisite for becoming a big cheese in US politics fits the observations well.
Trumpeteer numbers: I’d now remove that part from my comment. You’re right. On the surface my claim could seem substantiated by things like the Atlantic’s “For many of Trump’s voters, the belief that the election was stolen is not a fully formed thought. It’s more of an attitude, or a tribal pose.” But even there, on closer reading, it comes out that in some form they do (or at least did) seem to believe it. Pardon my shallow remark before checking the facts more carefully.
AI safety: I guess what could make the post easier to understand, then, is if you made it clearer (i) whether you believe AI safety is in reality no major issue (vs. merely overemphasized and abused by big tech to gain regulatory advantages), and, if you do dismiss most AI safety concerns, (ii) whether you do so mainly for today’s AI or also for what we expect in the future.
Ng: In no way do I doubt his merits as an AI pioneer! But that does not guarantee he has the right assessment of the technology’s future dangers. Incidentally, I also found some of his dismissals rather lighthearted; I remember this one. On your link, Google Brain founder says big tech is lying about AI extinction danger: that article quotes Ng calling it a “bad idea that AI could make us go extinct”, but it does not provide any argument supporting that claim. Again, I do not contest that AI leaders are overemphasizing their concerns and abusing them for regulatory capture. The incentives are obviously huge. Ng might even be right that “with the direction regulation is headed in a lot of countries, I think we’d be better off with no regulation than what we’re getting”, and that’s a hugely relevant question. I just don’t think it’s a reason to dismiss AI safety concerns more generally (and imho your otherwise valid post loses power by pushing in that direction, e.g. with the Ng example).
I believe that AI safety is a real issue. There are both near term and long term issues.
I believe that the version of AI safety that will get traction is regulatory capture.
I believe that the AI safety community is too focused on what fascinating technology can do, and not enough on the human part of the equation.
On Andrew Ng, his point is that he doesn’t see how exactly AI is realistically going to kill all of us. Without a concrete argument worth responding to, what can he really say? I disagree with him on this; I do think there are realistic scenarios to worry about. But I do agree with him about what is happening politically with AI safety.
Thanks, yes, sadly it all seems very plausible to me too.