jchan
Through a panel, darkly: a case study in internet BS detection
One time, a group of particularly indecisive friends started an email thread to arrange a get-together. Several of them proposed various times/locations, but nobody expressed any preference among them. With the date drawing near, I broke the deadlock by saying something like “I have consulted the omens and determined that X is the most auspicious time/place for us to meet.” (I hope they understood I was joking!) I have also used coin flips or the hash of an upcoming Bitcoin block for similar purposes, as in the sketch below.
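(A minimal sketch of the block-hash trick, assuming everyone agrees in advance on an ordered list of options and on a future block height; the hash shown is a made-up placeholder, and in practice you’d copy the real one from a block explorer once the block is mined:)

```python
# Minimal sketch: use a future Bitcoin block hash as a shared, unpredictable
# random value. Everyone commits to the option list and a block height in
# advance; once the block is mined, its hash deterministically selects one
# option, so nobody controls (or is responsible for) the outcome.

options = ["Cafe on Saturday", "Park on Sunday", "Bar on Friday"]  # agreed in advance

# Made-up placeholder; replace with the actual hash of the agreed-upon block.
block_hash = "00000000000000000002a7544f2a1ce3b6f4c1d85e97f20b3c6d8e9a1b2c3d4e"

index = int(block_hash, 16) % len(options)
print("The omens have chosen:", options[index])
```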
I think the sociological dynamic is something like: Nobody really cares what we coordinate on, but they do care about (a) not wanting to be seen as unjustifiably grabbing social status by imposing a single choice on everyone else, and (b) not wanting to accept lower status by going along with someone else’s preference. So, to coordinate, we defer the choice to some “objective” external process, so that nobody’s social status is altered by it.
An example where this didn’t work: The Gregorian calendar took centuries to be adopted throughout Europe, despite being justified by “objective” astronomical data, because non-Catholic countries thought of it as a “papal imposition” whose acceptance would imply acceptance of the Pope’s authority over the whole Christian church. (Much better to stick with Julius Caesar’s calendar instead!)
This may shed some light onto why people have fun playing the Schelling game. It’s always amusing when I discover how uncannily others’ thoughts match my own, e.g. when I think to myself “X! No, X is too obscure, I should probably say the more common answer Y instead”, and then it turns out X is the majority answer after all.
Solstice song: Here Lies the Dragon
Thanks everyone for coming! Feedback survey here: https://forms.gle/Nx4vqmXZnJ8EuuKP9
What exactly did you do with the candles? I’ve seen pictures and read posts mentioning the fact that candles are used at solstice events, but I’m having trouble imagining how it works without being logistically awkward. E.g.:
Where are the candles stored before they’re passed out to the audience?
At what point are the candles passed out? Do people get up from their seats, go get a candle, and then return to their seats, or do you pass around a basket full of candles?
When are the candles initially lit? Before or after they’re distributed?
When are the candles extinguished during the “darkening” phase? How does each person know when to extinguish their own candle?
Is there a point later when people can ditch their candles? Otherwise, it must be annoying to have to hold a lit candle throughout the whole “brightening” phase.
What happens to the candles at the end?
I wrote up the following a few weeks ago in a document I shared with our solstice group, which seems to independently parallel G Gordon Worley III’s points:
To- | morrow can be brighter than [1]
to- | day, although the night is cold [2]
the | stars may seem so very far
a- | way… [3]
But | courage, hope and reason burn,
in | every mind, each lesson learned, [4]
[5] | shining light to guide our way,
[6] | make tomorrow brighter than [7]
to- | day…
1: It’s weird that the comma isn’t here, but rather 1 beat later.
2: The unnecessary syncopation on “night is cold” is all but guaranteed to throw people off.
3: If this is supposed to rhyme with “today” from before, it falls flat because “today” is not really at the end of the line, despite the way it’s written.
4: A rhyme is set up here with “burn”/”learned,” but there is no analogous rhyme in the first stanza.
5: It really feels like there should be an unstressed pickup syllable here, based on the expectation set by all the previous measures.
6: Same here.
7: The stanza should really end here, but it goes on for another measure. (A 9-measure phrase? Who does that?)
To clarify some of these points:
1 & 3: There’s a mismatch between the poetic grouping of words and the rhythmical grouping, which is probably why bgaesop stumbles at that spot. This mismatch is made obvious by writing out the words according to the rhythmical grouping, as above.
2: The “official” version has “night is cold” on a downbeat with the rhythm “16th, 8th, quarter”, which is a very unusual rhythm. Notice that in the live recording here, the group attempts the syncopated rhythm the first time, but stumbles into “the stars may seem...”, and then reverts to the much more natural rhythm “8th, 8th, dotted-8th” in all subsequent iterations.
7: Mozart’s Musical Joke makes fun of bad compositions by starting off with a 7-measure phrase. Phrases are usually in powers of 2 or “nice” composite numbers like 6 or 12; a large prime number like 7 is silly because it can’t be imagined as having any internal regularity. You could maybe get away with 9 if it can be thought of as three 3-measure subphrases, but this song doesn’t do that.
In my opinion, a good singalong song should contain few or no irregularities in rhyme or rhythm. In LW jargon, if you think of the song as a stream of data which people are trying to predict in real time, you want them to quickly form an accurate, low-Kolmogorov-complexity model of the whole song based on just a small amount of input at the beginning.
(I’ve always hated singing “the bombs” in the Star-Spangled Banner!)
I think most non-experts still have only a vague understanding of what cryptocurrency actually is, and just mentally lump together all related enterprises into one big category—which is reinforced by the fact that people involved in one kind of business will tend to get involved in others as well. FTX is an exchange, Alameda is a fund, and FTT is a currency, and each of these things could theoretically exist apart from the others, but a layperson will point at all of them and say “FTX” in the same way as one might refer to a PlayStation console as “the Nintendo.”
Legally speaking this is nonsense, but when we’re talking about “social context,” a lack of clarity in the common understanding of what exactly these businesses do might provide an opening for self-deception on the part of the people running them, regarding what illegal activities are “socially acceptable” in their field.
Meta question: What do you think of this style of presenting information? Is it useful?
Austin LW meetup notes: The FTX Affair
The more resources people in a community have, the easier it is for them to run events that are free for the participants. The tech community has plenty of money and therefore many tech events are free.
This applies to “top-down funded” events, like a networking thing held at some tech startup’s office, or a bunch of people having their travel expenses paid to attend a conference. There are different considerations with regard to ideological messages conveyed through such events (which I might get into in another post), but this is different from the central example of a “tech/finance/science bubble event” that I’m thinking of, which is “a bunch of people meeting in a cafe/bar/park”.
Or alternatively, do it the way the church does and have no entrance fee and ask for donations during the event.
I would indeed have found this less off-putting, though I’m not sure exactly why.
This is a fair point but I think not the whole story. The events that I’m used to (not just LW and related meetups, but also other things that happen to attract a similar STEM-heavy crowd) are generally held in cafes/bars/parks where nobody has to pay anything to put on the event, so it seems like financial slack isn’t a factor in whether those events happen or not.
Could it be an issue of organizers’ free time? I don’t think it’s particularly time-consuming to run a meetup, especially if you’re not dealing with money and accounting, though I could be wrong.
We might also consider the nature of the activity. One can’t very well meditate in a bar, but parks are still an option, albeit less comfortable than a yoga studio. But isn’t it worth accepting the discomfort for the sake of bringing in more people? Depends on what you’re trying to do, I guess.
Really helpful to hear an on-the-ground perspective!
(I do live in America—Austin specifically.)
I don’t think this issue is specific to spirituality; these are just the most salient examples I can think of where it’s been dealt with for a long time and explicitly discussed in ancient texts. (For a non-spiritual example, according to Wikipedia the Platonic Academy didn’t charge fees either, though I doubt they left any surviving writings explaining why.)
How would you respond to someone who says “I can easily pay the recommended donation of $20 but I don’t think this event/activity is worth nearly as much as you seem to think I should consider it worth, so I’m going to pay only $5 so that it’s still positive-on-net for me to be here”? In other words, pay-what-you-want as opposed to pay-what-you-can.
If I were in your position I’d probably welcome such a person at first, but if they keep coming back while still paying only $5 I might be inclined to think negatively of them, or pressure them to either pay more or leave. Which also seems like a bad thing, so maybe it’s best to collect donations anonymously so that nobody feels pressured.
The problem is that the functions of “doing X” and “convincing people that doing X is worthwhile” are often served simultaneously by the same activities, and are difficult to disentangle.
You are forced to trust what others tell you.
The difference between fiction and non-fiction is that non-fiction at least purports to be true, while fiction doesn’t. I can decide whether I want to trust what Herodotus says, but it’s meaningless to speak of “trusting” the Sherlock Holmes stories because they don’t make any claims about the world. Imagining that they do is where the fallacy comes in.
For example, kung-fu movies give a misleading impression of how actual fights work, not because the directors are untrustworthy or misinformed, but because it’s more fun than watching realistic fights, and they’re optimizing for that, not for realism.
If you categorically don’t pay people who are purveyors of values, then you are declaring that you want nobody to be a purveyor of values as their full-time job.
Would this really be a bad thing? The current situation seems like a defect/defect equilibrium—I want there to be full-time advocates for Good Values, but only to counteract all the other full-time advocates for Bad Values. It would be better if we could just agree to ratchet down the ideological arms race so that we can spend our time on more productive, non-zero-sum activities.
But unlike soldiers in a literal arms race, value-purveyors (“preachers” for short) only have what power we give them. A world where full-time preachers are ipso facto regarded as untrustworthy seems more achievable than one in which we all magically agree to dismantle our militaries.
I think there could be a lot of value generated by having more people organize valuable events and take money for them.
Perhaps, but this positive value will be more than counteracted by the negative value generated by Bad-Values-havers also organizing more events.
This intuitively seems true to me, but may not be obvious. It’s based on the assumption that some attributes of an ideology (e.g. the presence of sincere advocates) are relatively more truth-correlated than other attributes (e.g. the profitability of events). Therefore, increasing the weight with which these more-truth-correlated attributes contribute to swaying public opinion, and decreasing the weight of less-truth-correlated attributes, will tend to promote the truth winning out.
(I have more points to add, but I’ll do that in another comment.)
Charging for the Dharma
OK, so if I understand this correctly, the proposed method is:
1. For each question, determine the log score, i.e. the natural logarithm of the probability that was assigned to the outcome that ended up happening.
2. Find the total score for each contestant.
3. For each contestant, find e to the power of his/her total score.
4. Distribute the prize to each contestant in proportion to that person’s share of the sum of that number across all contestants.
(Edit: I suppose it’s simpler to just multiply each contestant’s probabilities together, and distribute the award in proportion to that result, since e^(Σ ln pᵢ) = Π pᵢ.)
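A minimal sketch of this payout rule (the function name and contestants are hypothetical, just for illustration):

```python
import math

def distribute_prize(assigned_probs, prize=100.0):
    """Split a prize using the exponentiated-log-score rule described above.

    assigned_probs maps each contestant to the list of probabilities they
    assigned to the outcomes that actually happened.
    """
    # exp(sum of log scores) is just the product of the probabilities
    likelihoods = {name: math.prod(ps) for name, ps in assigned_probs.items()}
    total = sum(likelihoods.values())
    return {name: prize * lk / total for name, lk in likelihoods.items()}

# Hypothetical example: three questions, two contestants
print(distribute_prize({
    "alice": [0.8, 0.6, 0.9],  # confident and mostly right: product = 0.432
    "bob":   [0.5, 0.5, 0.5],  # maximally uncertain: product = 0.125
}))
# alice gets 100 * 0.432 / (0.432 + 0.125) ≈ 77.56; bob gets ≈ 22.44
```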
I have a vague memory of a dream which had a lasting effect on my concept of personal identity. In the dream, there were two characters who each observed the same event from different perspectives, but were not at the time aware of each other’s thoughts. However, when I woke up, I equally remembered “being” each of those characters, even though I also remembered that they were not the same person at the time. This showed me that it’s possible for two separate minds to merge into one, and that personal identity is not transitive.
See also Newcomblike problems are the norm.
When I discuss this with people, the response is often something like: My value system includes a term for people other than myself—indeed, that’s what “morality” is—so it’s redundant / double-counting to posit that I should value others’ well-being also as an acausal “means” to achieving my own ends. However, I get the sense that this disagreement is purely semantic.
I’m unsure whether it’s a good thing that LLaMA exists in the first place, but given that it does, it’s probably better that it leak than that it remain private.
What are the possible bad consequences of inventing LLaMA-level LLMs? I can think of three. However, #1 and #2 are of a peculiar kind where the downsides are actually mitigated rather than worsened by greater proliferation. I don’t think #3 is a big concern at the moment, but this may change as LLM capabilities improve (and please correct me if I’m wrong in my impression of current capabilities).
1. Economic disruption: LLMs may lead to unemployment because it’s cheaper to use one than to hire a human to do the same work. However, given that they already exist, it’s only a question of whether the economic gains accrue to a few large corporations or to a wider mass of people. If you think economic inequality is bad (whether per se or due to its consequences), then you’ll think the LLaMA leak is a good thing.
2. Informational chaos: You can never know whether product reviews, political opinions, etc. are actually genuine expressions of what some human being thinks rather than AI-generated fluff created by actors with an interest in deceiving you. This was already a problem (e.g. paid shills), but with LLMs it’s much easier to generate disinformation at scale. However, this problem “solves itself” once LLMs are so easily accessible that everyone knows not to trust anything they read anyway. (By contrast, if LLMs are kept private, AI-generated content seems more trustworthy because it comes in a wider context where most content is still human-authored.)
3. Infohazard production: If e.g. there’s some way of building a devastating bioweapon using household materials, then it’d be really bad if LLaMA made this knowledge more accessible, or could discover it anew. However, I haven’t seen any evidence that LLaMA is capable of discovering new scientific knowledge that’s not in the training set, or that querying it to surface existing such knowledge is any more effective than using a regular search engine. But this may change with more advanced models.