The fact that you feel it is appropriate to use the words “non-believer” and “converted” says to me not just that you’re likely to communicate something concerning, but that the concerning thing it implies may actually be true in this instance, which itself seems bad to me. I am quite worried about the degree of inter-agent coprotection misalignment in the world today, and I do think there’s something spiritual to the task of promoting morality in a world of advanced technology. And yet, I wouldn’t want to pitch my beliefs about any of this to someone as though even I think they’re exactly true.

A common pattern in someone who thinks themselves “converted” to a “religion” is that they start taking things unquestioningly from that pseudo-religion’s writings. Historically, an issue I’ve had with LessWrong is the way confident nerds like myself (and, more relevantly, Soares, Yudkowsky, MIRI, et al.) tend to read like a confident religious text to someone who is super duper convinced that everyone on LessWrong is in fact consistently less wrong than the outside world. Don’t let this intense emotional “whoa!” make you start taking Yudkowsky as some sort of prophet! Those folks exist, and mostly they don’t contribute very much. Much better to treat this like a research field; like any other research field, it’s filled with people trying to avoid being crackpots and sometimes even succeeding.
… hopefully this warning is irrelevant and you only speak in metaphor, but I figure it’s good to push back against accidental appeals to authority!
This is a great comment, but you don’t need to worry that I’ll be indoctrinated!
I was actually using that terminology a bit tongue in cheek, as I perceive exactly what you say about the religious fervour of some AI alignment proponents. I think the general attitude and vibe of Yudkowsky et al. is one of the main reasons I was suspicious of their arguments for AI takeoff in the first place.
I also agree, despite thinking that deceptive alignment is the likely outcome by default, which would be a misaligned outcome. I too dislike much of the early writing for putting too much probability on FOOM, despite thinking AGI is highly probable this century; I assign FOOM only a 2-3% probability.
However, I disagree with your criticism of thinkism, primarily because my crux here is that past communication about AI risk to the public had the exact opposite effect: people failed to register the Doom part while enthusiastically embracing the Powerful part.
Another part is that even competent societies probably could slow down AGI, but if takeoff occurred during a crisis, or if deceptive alignment happened before it, society auto-loses. In other words, people assign too much probability mass to an equalized fight because of sci-fi, when the distribution of outcomes is more binary than that.