jchan
I wrote up the following a few weeks ago in a document I shared with our solstice group, which seems to independently parallel G Gordon Worley III’s points:
To- | morrow can be brighter than [1]
to- | day, although the night is cold [2]
the | stars may seem so very far
a- | way… [3]
But | courage, hope and reason burn,
in | every mind, each lesson learned, [4]
[5] | shining light to guide our way,
[6] | make tomorrow brighter than [7]
to- | day…

1. It’s weird that the comma isn’t here, but rather 1 beat later.
2. The unnecessary syncopation on “night is cold” is all but guaranteed to throw people off.
3. If this is supposed to rhyme with “today” from before, it falls flat because “today” is not really at the end of the line, despite the way it’s written.
4. A rhyme is set up here with “burn”/“learned,” but there is no analogous rhyme in the first stanza.
5. It really feels like there should be an unstressed pickup syllable here, based on the expectation set by all the previous measures.
6. Same here.
7. The stanza should really end here, but it goes on for another measure. (A 9-measure phrase? Who does that?)
To clarify some of these points:
1 & 3: There’s a mismatch between the poetic grouping of words and the rhythmical grouping, which is probably why bgaesop stumbles at that spot. This mismatch is made obvious by writing out the words according to the rhythmical grouping, as above.
2: The “official” version has “night is cold” on a downbeat with the rhythm “16th, 8th, quarter”, which is a very unusual rhythm. Notice that in the live recording here, the group attempts the syncopated rhythm the first time, but stumbles into “the stars may seem...”, and then reverts to the much more natural rhythm “8th, 8th, dotted-8th” in all subsequent iterations.
7: Mozart’s Musical Joke makes fun of bad compositions by starting off with a 7-measure phrase. Phrases are usually in powers of 2 or “nice” composite numbers like 6 or 12; a large prime number like 7 is silly because it can’t be imagined as having any internal regularity. You could maybe get away with 9 if it can be thought of as 3 3-measure subphrases, but this song doesn’t do that.
In my opinion, a good singalong song must have very low or zero tolerance for any irregularities in rhyme or rhythm. In LW jargon, if you think of the song as a stream of data which people are trying to predict in real time, you want them to quickly form an accurate, low-Kolmogorov-complexity model of the whole song based on just a small amount of input at the beginning.
(I’ve always hated singing “the bombs” in the Star-Spangled Banner!)
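To make the “low-complexity model” idea concrete, compressed size is a rough, computable stand-in for Kolmogorov complexity. Here’s a toy example of my own (the rhythm strings are made up, not taken from the song):

```python
import zlib

# "x" = note onset, "." = rest; each 8-character cell is one measure.
regular = "x.x.x.x." * 16                                  # the same cell throughout
irregular = "x.x.x.x." * 7 + "xx..x.x." + "x.x.x.x." * 8   # one syncopated measure

print(len(zlib.compress(regular.encode())))    # smaller: pure repetition
print(len(zlib.compress(irregular.encode())))  # larger: the exception costs extra bytes
```

A single irregular measure forces the listener to store an exception in their model of the song, just as it forces the compressor to spend extra bytes.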
I think most non-experts still have only a vague understanding of what cryptocurrency actually is, and just mentally lump together all related enterprises into one big category—which is reinforced by the fact that people involved in one kind of business tend to get involved in others as well. FTX is an exchange, Alameda is a fund, and FTT is a currency; each of these could theoretically exist apart from the others, but a layperson will point at all of them and say “FTX,” in the same way one might refer to a PlayStation console as “the Nintendo.”
Legally speaking this is nonsense, but when we’re talking about “social context,” a lack of clarity in the common understanding of what exactly these businesses do might provide an opening for self-deception on the part of the people running them, regarding what illegal activities are “socially acceptable” in their field.
Meta question: What do you think of this style of presenting information? Is it useful?
The more resources people in a community have, the easier it is for them to run events that are free for the participants. The tech community has plenty of money and therefore many tech events are free.
This applies to “top-down funded” events, like a networking thing held at some tech startup’s office, or a bunch of people having their travel expenses paid to attend a conference. There are different considerations with regard to ideological messages conveyed through such events (which I might get into in another post), but they’re different from the central example of a “tech/finance/science bubble event” that I’m thinking of, which is “a bunch of people meeting in a cafe/bar/park”.
Or alternatively, do it the way the church does and have no entrance fee and ask for donations during the event.
I would indeed have found this less off-putting, though I’m not sure exactly why.
This is a fair point but I think not the whole story. The events that I’m used to (not just LW and related meetups, but also other things that happen to attract a similar STEM-heavy crowd) are generally held in cafes/bars/parks where nobody has to pay anything to put on the event, so it seems like financial slack isn’t a factor in whether those events happen or not.
Could it be an issue of organizers’ free time? I don’t think it’s particularly time-consuming to run a meetup, especially if you’re not dealing with money and accounting, though I could be wrong.
We might also consider the nature of the activity. One can’t very well meditate in a bar, but parks are still an option, albeit less comfortable than a yoga studio. But isn’t it worth accepting the discomfort for the sake of bringing in more people? Depends on what you’re trying to do, I guess.
Really helpful to hear an on-the-ground perspective!
(I do live in America—Austin specifically.)
I don’t think this issue is specific to spirituality; these are just the most salient examples I can think of where it’s been dealt with for a long time and explicitly discussed in ancient texts. (For a non-spiritual example, according to Wikipedia the Platonic Academy didn’t charge fees either, though I doubt they left any surviving writings explaining why.)
How would you respond to someone who says “I can easily pay the recommended donation of $20 but I don’t think this event/activity is worth nearly as much as you seem to think I should consider it worth, so I’m going to pay only $5 so that it’s still positive-on-net for me to be here”? In other words, pay-what-you-want as opposed to pay-what-you-can.
If I were in your position I’d probably welcome such a person at first, but if they keep coming back while still paying only $5 I might be inclined to think negatively of them, or pressure them to either pay more or leave. Which also seems like a bad thing, so maybe it’s best to collect donations anonymously so that nobody feels pressured.
The problem is that the functions of “doing X” and “convincing people that doing X is worthwhile” are often served simultaneously by the same activities, and are difficult to disentangle.
You are forced to trust what others tell you.
The difference between fiction and non-fiction is that non-fiction at least purports to be true, while fiction doesn’t. I can decide whether I want to trust what Herodotus says, but it’s meaningless to speak of “trusting” the Sherlock Holmes stories because they don’t make any claims about the world. Imagining that they do is where the fallacy comes in.
For example, kung-fu movies give a misleading impression of how actual fights work, not because the directors are untrustworthy or misinformed, but because it’s more fun than watching realistic fights, and they’re optimizing for that, not for realism.
If you categorically don’t pay people who are a purveyor of values, then you are declaring that you want that nobody is a purveyor of values as their full-time job.
Would this really be a bad thing? The current situation seems like a defect/defect equilibrium—I want there to be full-time advocates for Good Values, but only to counteract all the other full-time advocates for Bad Values. It would be better if we could just agree to ratchet down the ideological arms race so that we can spend our time on more productive, non-zero-sum activities.
But unlike soldiers in a literal arms race, value-purveyors (“preachers” for short) only have what power we give them. A world where full-time preachers are ipso facto regarded as untrustworthy seems more achievable than one in which we all magically agree to dismantle our militaries.
I think there could be a lot of value generated by having more people organize valuable events and take money for them.
Perhaps, but this positive value will be more than counteracted by the negative value generated by Bad-Values-havers also organizing more events.
This intuitively seems true to me, but may not be obvious. It’s based on the assumption that some attributes of an ideology (e.g. the presence of sincere advocates) are relatively more truth-correlated than other attributes (e.g. the profitability of events). Therefore, increasing the weight with which these more-truth-correlated attributes contribute to swaying public opinion, and decreasing the weight of less-truth-correlated attributes, will tend to promote the truth winning out.
(I have more points to add, but I’ll do that in another comment.)
OK, so if I understand this correctly, the proposed method is:
1. For each question, determine the log score, i.e. the natural logarithm of the probability that was assigned to the outcome that ended up happening.
2. Find the total score for each contestant.
3. For each contestant, find e to the power of his/her total score.
4. Distribute the prize to each contestant in a fraction proportional to that person’s share in the sum of that number across all contestants.
(Edit: I suppose it’s simpler to just multiply all of each contestant’s probabilities together, and distribute the award proportional to that result.)
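Here’s a minimal sketch of the procedure in Python (the function name, contestant names, and probabilities are all made up for illustration):

```python
import math

# probs_by_contestant maps each contestant to the probabilities they assigned
# to the outcomes that actually happened (one per question).
def prize_shares(probs_by_contestant, prize=100.0):
    # Steps 1-3: exponentiate each contestant's total log score.
    # exp(sum of logs) is just the product of the probabilities (per the edit above).
    weights = {
        name: math.exp(sum(math.log(p) for p in probs))
        for name, probs in probs_by_contestant.items()
    }
    # Step 4: distribute the prize proportionally to the weights.
    total = sum(weights.values())
    return {name: prize * w / total for name, w in weights.items()}

shares = prize_shares({"A": [0.8, 0.6, 0.9], "B": [0.5, 0.5, 0.5]})
print(shares)  # A's weight is 0.432, B's is 0.125, so A gets ~77.6% of the prize
```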
I have a vague memory of a dream which had a lasting effect on my concept of personal identity. In the dream, there were two characters who each observed the same event from different perspectives, but were not at the time aware of each other’s thoughts. However, when I woke up, I equally remembered “being” each of those characters, even though I also remembered that they were not the same person at the time. This showed me that it’s possible for two separate minds to merge into one, and that personal identity is not transitive.
See also Newcomblike problems are the norm.
When I discuss this with people, the response is often something like: “My value system includes a term for people other than myself—indeed, that’s what ‘morality’ is—so it’s redundant / double-counting to posit that I should value others’ well-being also as an acausal ‘means’ to achieving my own ends.” However, I get the sense that this disagreement is purely semantic.
Hint:
It’s a character from a movie.
It turns out Japanese words are really useful for filling in crosswords, since they have so many vowels.
Well done! This was solved faster than I expected.
Texas Freeze Retrospective may have some useful info.
If the cryptography example is too distracting, we could instead imagine a non-cryptographic means to the same end, e.g. printing the surveys on leaflets which the employees stuff into envelopes and drop into a raffle tumbler.
The point remains, however, because (just as with the blinded signatures) this method of conducting a survey is very much outside-the-norm, and it would be a drastic world-modeling failure to assume that the HR department actually considered the raffle-tumbler method but decided against it because they secretly do want to deanonymize the surveys. Much more likely is that they simply never considered the option.
But if employees did start adopting the rule “don’t trust the anonymity of surveys that aren’t conducted via raffle tumbler”, even though this is epistemically irrational at first, it would eventually compel HR departments to start using the tumbler method, whereupon the odd surveys that still are being conducted by email will stick out, and it would now be rational to mistrust them. In short, the Adversarial Argument is “irrational” but creates the conditions for its own rationality, which is why I describe it as an “acausal negotiation tactic”.
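For anyone to whom the blinded-signature scheme is unfamiliar, here’s a toy sketch in Python, using textbook RSA with insecure made-up parameters (everything here is for illustration only, not the scheme from the original discussion). HR signs a blinded survey token without seeing it, so the signature it later verifies can’t be linked back to the employee who requested it:

```python
import random

p, q = 1009, 1013                  # toy primes; a real deployment would use ~2048-bit keys
n = p * q
e = 65537                          # HR's public verification exponent
d = pow(e, -1, (p - 1) * (q - 1))  # HR's private signing exponent

token = 424242                     # the employee's survey token (stand-in for a hash)

# Employee blinds the token with a random factor r before sending it to HR.
while True:
    r = random.randrange(2, n)
    try:
        r_inv = pow(r, -1, n)      # r must be invertible mod n
        break
    except ValueError:
        continue
blinded = token * pow(r, e, n) % n

# HR signs the blinded value; it learns nothing about the underlying token.
blind_sig = pow(blinded, d, n)

# Employee strips the blinding factor, yielding a valid signature on the token.
sig = blind_sig * r_inv % n
assert pow(sig, e, n) == token     # verifiable by anyone, unlinkable by HR
```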
You mention “Infra-Bayesianism” in that Twitter thread—do you think that’s related to what I’m talking about here?
This is interesting, because it seems that you’ve proved the validity of the “Strong Adversarial Argument”, at least in a situation where we can say:
This event is incompatible with XYZ, since Y should have been called.
In other words, we can use the Adversarial Argument (in a normal Bayesian way, not as an acausal negotiation tactic) when we’re in a setting where the rule against hearsay is enforced. But what reason could we have had for adopting that rule in the first place? It could not have been because of the reasoning you’ve laid out here, which presupposes that the rule is already in force! The rule is epistemically self-fulfilling, but its initial justification would have seemed epistemically “irrational”.
So, why do we apply it in a courtroom setting but not in ordinary conversation? In short, because the stakes are higher and there’s a strong positive incentive to deceive.
To make it slightly more concrete, we could say: one copy is put in a red room, and the other in a green room; but at first the lights are off, so both rooms are pitch black. I wake up in the darkness and ask myself: when I turn on the light, will I see red or green?
There’s something odd about this question. “Standard LessWrong Reductionism” must regard it as meaningless, because otherwise it would be a question about the scenario that remains unanswered even after all physical facts about it are known, thus refuting reductionism. But from the perspective of the test subject, it certainly seems like a real question.
Can we bite this bullet? I think so. The key is the word “I”—when the question is asked, the asker doesn’t know which physical entity “I” refers to, so it’s unsurprising that the question seems open even though all the physical facts are known. By analogy, if you were given detailed physical data of the two moons of Mars, and then you were asked “Which one is Phobos and which one is Deimos?”, you might not know the answer, but not because there’s some mysterious extra-physical fact about them.
So far so good, but now we face an even tougher bullet: If we accept quantum many-worlds and/or modal realism (as many LWers do), then we must accept that all probability questions are of this same kind, because there are versions of me elsewhere in the multiverse that experience all possible outcomes.
Unless we want to throw out the notion of probabilities altogether, we’ll need some way of understanding self-location problems besides dismissing them as meaningless. But I think the key is in recognizing that probability is ultimately in the map, not the territory, however real it may seem to us—i.e. it is a tool for a rational agent to achieve its goals, and nothing more.
What exactly did you do with the candles? I’ve seen pictures and read posts mentioning the fact that candles are used at solstice events, but I’m having trouble imagining how it works without being logistically awkward. E.g.:
Where are the candles stored before they’re passed out to the audience?
At what point are the candles passed out? Do people get up from their seats, go get a candle, and then return to their seats, or do you pass around a basket full of candles?
When are the candles initially lit? Before or after they’re distributed?
When are the candles extinguished during the “darkening” phase? How does each person know when to extinguish their own candle?
Is there a point later when people can ditch their candles? Otherwise, it must be annoying to have to hold a lit candle throughout the whole “brightening” phase.
What happens to the candles at the end?