The comment I was responding to also didn’t offer serious relevant arguments.
And it didn’t label the position it was arguing against as “insane”, so this is also entirely off-topic.
It would be ideal for users to always describe why they have reached the conclusions they have, but that is a fabricated option: it does not take into account the basic observation that requiring such explanations creates such a tremendous disincentive to commenting that it would drastically reduce the quantity of useful contributions in the community, making things worse off than they were before.
So the compromise we reach is one in which users can state their conclusions in a relatively neutral manner that does not poison the discourse that comes afterwards; then, if another user has a question or a disagreement about the matter, they can later have a regular, non-Demon Thread discussion in which they explain their models and the evidence that led them to their positions.
I think you are also expressing high emotive confidence in your comments. You are presenting a case, and your expressed confidence is slightly lower, but still elevated.
I agree[1], and I think it is entirely appropriate to do so, given that I have given some explanations of the mental models behind my positions on these matters.
For clarity, I’ll summarize my conclusion here, on the basis of what I have explained before (1, 2, 3):
1. It is fine[2] to label opinions you disagree with as “insane”.
2. It is fine to give your conclusions without explaining the reasons behind your positions.[3]
3. It is not fine to do 1 and 2 at the same time.
With regard to your “taboo off-topic” reaction, what I mean by “off-topic” in this case is “irrelevant to the discussion at hand, by focusing on the wrong level of abstraction (meta-level norms vs object-level discourse) and by attempting to say the other person behaved similarly, which is incorrect as a factual matter (see the distinction between points 2 and 3 above), but, more importantly, immaterial to the topic at hand even if true”.
The comment I was responding to also didn’t offer serious relevant arguments.
I’m time-bottlenecked now, but I’ll give one example. Consider the Natural Abstraction Hypothesis (NAH) agenda (which, fwiw, I think is an example of considerably-better-than-average work on trying to solve the problem from scratch). I’d argue that even for someone interested in this agenda:

1. Most of the relevant work has come (and will keep coming) from outside the LW community (see e.g. The Platonic Representation Hypothesis and compare the literature reviewed there with NAH-related work on LW).
2. Given the previous point, the typical AI safety researcher interested in NAH would do better to spend most of their time (at least at the very beginning) looking at potentially relevant literature outside LW, rather than either trying to start from scratch or mostly looking at LW literature.
considerably-better-than-average work on trying to solve the problem from scratch
It’s considerably better than average but is a drop in the bucket and is probably mostly wasted motion. And it’s a pretty noncentral example of trying to solve the problem from scratch. I think most people reading this comment just don’t even know what that would look like.
even for someone interested in this agenda
At a glance, this comment seems like it might be part of a pretty strong case that [the concrete ML-related implications of NAH] are much better investigated by the ML community compared to LW alignment people. I doubt that the philosophically more interesting aspects of Wentworth’s perspectives relating to NAH are better served by looking at ML stuff, compared to trying from scratch or looking at Wentworth’s and related LW-ish writing. (I’m unsure about the mathematically interesting aspects; the alternative wouldn’t be in the ML community but would be in the mathematical community.)
And most importantly, “someone interested in this agenda” is already a somewhat nonsensical or question-begging conditional. You brought up “AI safety research” specifically, and by that term you are morally obliged to mean [the field of study aimed at figuring out how to make cognitive systems that are more capable than humanity and also serve human value]. That pursuit is better served by trying from scratch. (Yes, I still haven’t presented an affirmative case. That’s because we haven’t even communicated about the proposition yet.)
The comment I was responding to also didn’t offer serious relevant arguments.
https://tsvibt.blogspot.com/2023/09/a-hermeneutic-net-for-agency.html
[1] I suspect my regular use of italics is part of what is giving off this impression.
[2] Although it is not ideal in most situations, and should be (lightly) discouraged in most spots.
[3] Although it would be best to be willing to engage in discussion about those reasons later on if other users challenge you on them.
Links have a high attrition rate; cf. the ratio of people who overcome a trivial inconvenience. Post your arguments compressed inline to get more eyeballs on them.