My guess at the truth of the matter is that almost no one is 100% guessing, but some people are extremely confident in their answer (a lot of the correct folks and also a small number of die-hard geocentrists), and then there’s a range down to people who haven’t thought about it in ages and just have a vague recollection of some elementary school teacher. Which I think is also a more hopeful picture than either the 36% clueless or the 18% geocentrists models? Because for people who are right but not confident, I’m reasonably ok with that; ideally they’d “know” more strongly, but it’s not a disaster if they don’t. And for people who are wrong but not confident, there are not that many of them and also they would happily change their mind if you just told them the correct answer.
How valid is it to assume that (approximately) everyone who got the heliocentrism question wrong got it wrong by “guessing”? If 18% got it wrong, then your model says that there’s 36% who had no clue and half guessed right, but at the other extreme there’s a model that everyone ‘knows’ the answer, but 18% ‘know’ the wrong answer. I’m not sure which is scarier (36% clueless or 18% die-hard geocentrists), but I don’t think we have enough information here to tell where on that spectrum it is. (In particular, if “I don’t know” was an option and only 3% selected it, then I think this is some evidence against the extreme end of 36% clueless?)
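(Spelling out the arithmetic behind that first model, under its own assumption that clueless respondents guess at random: if a fraction g of respondents have no clue and split 50/50 between the two options, guessers contribute an expected g/2 to the wrong-answer rate, so g/2 = 18% gives g = 36%.)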
Too Like the Lightning by Ada Palmer :)
Here’s the closest thing to your argument in this post that I’d endorse:
Ukraine did not allow its men to leave.
NYT did not mention this fact as often as a “fully unbiased” paper would have, and in fact often used wordings that were deliberately deceptive in that they’d cause a reader to assume that men were staying behind voluntarily.
Therefore NYT is not a fully unbiased paper.
The part I disagree with: I think this is drastically blown out of proportion (e.g. that this represents “extreme subversions of democracy”). Yes, NYT (et al.) is biased, but:
I think this has been true for much longer than just the Ukraine war
I think there are much stronger pieces of evidence one could use to demonstrate it than this stuff about the Ukraine war (e.g. https://twitter.com/KelseyTuoc/status/1588231892792328192)
I think that, yes, all this is bad, but democracy is managing to putter along anyway
Which means that in my eyes, the issue with your post is precisely the degree to which it is exaggerating the problem. Which is why I (and perhaps other commenters) focused our comments on your exaggerations, such as the “routinely and brazenly lied” title. So these comments seem quite sane to me; I think you’re drawing entirely the wrong lesson from all this if you think the issue is that you drew the wrong people here with your tagging choices (you’d have drawn me no matter what with your big-if-true title), or didn’t post large enough excerpts from the articles. But if you plan to firm up the “implications for geopolitics and the survival of democracy”, I look forward to reading that.
The title of this post is a “level 6 lie” too. I, and I’d guess many if not most of your readers, came here expecting to read about some level-7 behavior once we saw “routinely and brazenly lied”. Which means you built a false model in our heads, even if you can claim you are technically accurate because Scott Alexander once wrote a thing where NYT’s behavior is level 6. Plus I will note that his description of level 6 calls it not technically lying, which rather weakens your claim to be even technically correct.
with the following assumptions:
Should the ∨ in assumption 1 be an ∧?
Cool idea!
One note about this:
Let’s see what happens if I tweak the language: … Neat! It’s picked up on a lot of nuance implied by saying “important” rather than “matters”.
Don’t forget that people trying to extrapolate from your five words have not seen any alternate wordings you were considering. The LLM could more easily pick up on the nuance there because it was shown both wordings and asked to contrast them. So if you actually want to use this technique to figure out what someone will take away from your five words, maybe ask the LLM about each possible wording in a separate sandbox rather than a single conversation.
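For concreteness, here’s a minimal sketch of what I mean, assuming the OpenAI Python client (any chat API would work the same way); the model name and the two candidate wordings are just placeholders, since I don’t know what you were actually testing:

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the environment

# Placeholder wordings, standing in for whichever "five words" you're weighing.
wordings = [
    "Your contribution is important.",
    "Your contribution matters.",
]

for wording in wordings:
    # One fresh conversation per wording: the model never sees the other
    # candidate, mimicking a reader who only ever encounters your five words.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f'Someone tells you only this: "{wording}" What do you take away from it?',
        }],
    )
    print(wording, "->", response.choices[0].message.content)
```

Comparing the two transcripts afterwards (rather than asking a single conversation to contrast the wordings) should give a better picture of what a cold reader would actually take away.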
US Department of Transportation, as I’m sometimes bold enough to call them
I assume you intended to introduce your “US DoT” abbreviation here?
Oh, and as an aside, a practical experiment I accidentally ran back in the day: I played in a series of Diplomacy games where there was common knowledge that if I ever broke my word on anything, all the other players would gang up on me, and I still won or ended in a 2-way draw (out of 6-7 players) most of the time. If you have a sufficient tactical and strategic advantage (aka are sufficiently in-context smarter), then a lie detector won’t stop you.
I’m not sure this is evidence for what you’re using it for? Giving up the ability to lie is a disadvantage, but you did get in exchange the ability to be trusted, which is a possibly larger advantage: there are moves which are powerful but leave you open to backstabbing; other alliances can’t make those moves and yours can.
Taken together, the two linked markets say there’s a significant chance that the House does absolutely nothing for multiple weeks (i.e. they don’t elect a new speaker and they don’t conduct legislative business either). I guess this is possible but I don’t think we’re that dysfunctional and will bet against that result when my next Manifold loan comes in.
I haven’t dived into the formalism (if there is one?), but I’m roughly using FDT to mean “make your decision with the understanding that you are deciding at the policy level, so this affects not just the current decision but all other decisions that fall under this policy that will be made by you or anything sufficiently like you, as well as all decisions made by anyone else who can discern (and cares about) your policy”. Which sounds complicated, but I think often really isn’t? e.g. in the habits example, it makes everything very simple (do the habit today because otherwise you won’t do it tomorrow either). CDT can get to the same result there—unlike for some weirder examples, there is a causal though not well-understood pathway between your decision today and the prospective cost you will face when making the decision tomorrow, so you could hack that into your calculations. But if by ‘overkill’ you mean using something more complicated than necessary, then I’d say that it’s CDT that would be overkill, not FDT, since FDT can get to the result more simply. And if by ‘overkill’ you mean using something more powerful/awesome/etc than necessary, then overkill is the best kind of kill :)
I think FDT is more practical as a decision theory for humans than you give it credit for. It’s true there are a lot of weird and uncompelling examples floating around, but how about this very practical one: the power of habits. There’s common and (I think) valuable wisdom that when you’re deciding whether to e.g. exercise today or not (assuming that’s something you don’t want to do in the moment but believe has long-term benefits), you can’t just consider the direct costs and benefits of today’s exercise session. Instead, you also need to consider that if you don’t do it today, realistically you aren’t going to do it tomorrow either because you are a creature of habit. In other words, the correct way to think about habit-driven behavior (which is a lot of human behavior) is FDT: you don’t ask “do I want to skip my exercise today” (to which the answer might be yes), instead you ask “do I want to be the kind of person who skips their exercise today” (to which the answer is no, because that kind of person also skips it every day).
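A toy way to make the habit example concrete (notation mine, purely illustrative): suppose each session costs c of effort, a single isolated session yields benefit B(1) < c, and a sustained streak of N sessions yields B(N) > N·c. The myopic question “is today’s session worth it?” compares B(1) against c and says skip; but if skipping today realistically means skipping every day, the real choice is between the whole streak and nothing, i.e. B(N) - N·c > 0 versus 0, and the policy-level answer is to go.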
Decision theories are not about what kind of agent you want to be. There is no one on god’s green earth who disputes that the types of agents who one-box are better off on average. Decision theory is about providing a theory of what is rational.
Taboo the word “rational”: what real-world effects are you actually trying to accomplish? Because if FDT makes me better off on average and CDT allows me to write “look how rational I am” in my diary, then there’s a clear winner here.
I suggest Codenames as a good game for meetups:
rounds are short (as is teaching the game)
it can take a flexible number of players, and people can easily drop in and out
it is good both for people who want a brain burner (play as captain) and for people who want something casual (don’t play as captain)
if you play without the timer, there will be dead time between turns for people to talk while the captains come up with clues
as GuySrinivasan comments, there are some people who don’t like board games, but anecdotally I’d estimate 90% of the people I’ve introduced to Codenames have enjoyed it, including multiple people who’ve said they don’t normally play/enjoy games.
committing to an earning-to-give path would be a bet on this situation being the new normal.
Is that true, or would it just be the much more reasonable bet that this situation ever occurs again? Because, at least in theory, a dedicated earning-to-give person could just invest money during the funding-flush times and then donate it the next time we’re funding-constrained.
Oh I entirely agree.
My guess is that a lot of the difference in perception-of-danger comes from how much control people feel they have in each situation. In a car I feel like I am in control, so as long as I don’t do stupid stuff I won’t get in an accident (fatal or otherwise), even though this is obviously not true as a random drunk driver could always hit me. Whereas on transit I feel less in control and have had multiple brushes with people who were obviously not fully in their right minds, one of whom claimed to have a gun; I may not have actually been in more danger but it sure felt like it.
Focusing only on deaths makes some sense, since death is the largest likely harm, but I will note that it’s not the only outcome some people are afraid of on public transit; if you include lesser harms like being mugged or groped, then that is going to tip the scales further towards driving, since those have a ~0% chance of happening while driving and a small-but-probably-non-trivial chance (I haven’t looked it up) on transit.
Good news: there is no way to opt in because you are already in. (If you want to opt out, we have a problem.)