I enjoyed this post, both for its satire of a bunch of people’s thinking styles (including mine, at times), and because IMO (and in the author’s opinion, I think) there are some valid points nearby, and it’s a bit tricky to know which parts of the “jokes/poetry” may have valid analogs.
I appreciate the author for writing it, because IMO we have a whole bunch of different subcultures, styles of conversation, and sets of assumptions suddenly colliding on the internet right now around AI risk, and noticing the existence of the others seems useful; IMO the OP is an attempt to collide LW with some other styles. Judging from the comments, it seems not to have succeeded all that much; but it was helpful to me, and I appreciate the effort. (Though, as a tactical note, it seems to me the failure was due mostly to the piece’s sarcasm, and I suspect sarcasm in general tends not to work well across cultural or inferential distances.)
Some points I consider valid that also appear within [the vibes-based reasoning the OP is trying to satirize, and also to model and engage with]:
1) Sometimes, talking a lot about a very specific fear can bring about the feared scenario. (An example I’m sure of: a friend’s toddler stuck her hands in soap. My friend said “don’t touch your eyes.” The toddler, unclear on the word ‘not,’ touched her eyes.) (A possible example I’m less confident in: articulated fears of AI risk may have accelerated AI, because humanity’s collective attentional flows, like toddlers, have no reasonable implementation of the word “not.”) This may be something for an AI risk movement to watch out for.
(I think this is non-randomly reflected in statements like: “worrying has bad vibes.”)
2) There are a lot of funny ways that attempting to control people or social processes can backfire. (Example: lots of people don’t like it when they feel like something is trying to control them.) (Example: the prohibition of alcohol in the US from 1920 to 1933 is said to have fueled organized crime.) (Example I’m less confident of: trying to keep e.g. anti-vax views out of public discourse leads some people to become paranoid and untrusting of establishment writing on the subject.) This may make trouble for some safety strategies, and it seems to me to be non-randomly reflected in “trying to control things has bad vibes.”
(Though, all things considered, I still favor trying to slow things down! And I care about that effort.)
3) There are a lot of places where different Schelling equilibria are available, and where groups can, should, and do try to pick the better equilibrium. In many cases this is done with vibes. Vibes, positivity, attending to what is or isn’t cool or authentic (vs. boring), etc., are part of how people decide which company to congregate in, which subculture to bring to life, which approach to AI to do research within, etc. -- and this is partly doing real work discerning what can become intellectually vibrant (vs. boring, lifeless, dissociated).
TBC, I would not want to use vibes-based reasoning in place of reasoning, and I would not want LW to accept vibes in place of reasons. I would want some/many at LW to learn to model vibes-based reasoning for the sake of understanding the social processes around us. I would also want some/many at LW to sometimes, if the track record pans out in a given domain, use something like vibes-based reasoning as a source of hypotheses that one can then check against actual reasoning. LW seems to me pretty solid on reasoning relative to other places I know on the internet, but only mediocre on generativity; I think learning to absorb hypotheses from varied subcultures (and from varied old books, from people who thought in other times and places) would probably help, and the OP is gesturing at one such subculture.
I’m posting this comment because I didn’t want to post this comment for fear of being written off by LW, and I’m trying to come out of more closets. Kinda at random, since I’ve spent large months or small years failing to successfully implement some sort of more planned approach.