Why does it having a productivity problem mean it doesn’t have a housing problem? Seems like you want to say housing will not fix its productivity problem? (And that is a bigger problem, thus housing is not the biggest?)
KatjaGrace
Fair! I interpret them as probably happy free-range sheep being raised for wool, an existence I’m happy about and in particular prefer to vegetablehood, but a) that seems uncertain, and b) ymmv regarding the value of unfree sheep lives being used as a means to an end etc.
The seals share the reference class “seals” but are different, notably one is way bigger than the others. So if you wanted to predict something about the big seal, there is a discussion to be had about what to make of the seal reference class, or other possible reference classes, e.g. “things that weigh half a ton”.
Assuming your preferences don’t involve other people or the world
Not sure about this, but to the extent it was so, often they were right that a lot of things they liked would be gone soon, and that that was sad. (Not necessarily on net, though maybe even on net for them and people like them.)
Seems like there are a lot of possibilities, some of them good, and I have little time to think about them. It just feels like a red flag for everything in your life to be swapped for other things by very powerful processes beyond your control while you are focused on not dying. Like, if lesser changes were upcoming in people’s lives such that they landed in near mode, I think they would be way less sanguine—e.g. being forced to move to New York City.
Do you mean that the half-day projects have to be in sequence relative to the other half-day projects, or within a particular half-day project, its contents have to be in sequence (so you can’t for instance miss the first step then give up and skip to the second step)?
In general if things have to be done in sequence, often I make the tasks non-specific, e.g. let’s say I want to read a set of chapters in order, then I might make the tasks ‘read a chapter’ rather than ‘read the first chapter’ etc. Then if I were to fail at the first one, I would keep reading the first chapter to grab the second item, then when I eventually rescued what would have been the first chapter, I would collect it by reading whatever chapter I was up to. (This is all hypothetical—I never read chapters that fast.)
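The trick above can be sketched in code, if that helps. This is a minimal, hypothetical model (the class name and bookmark mechanics are my own invention, not any particular task app): because every task is the generic ‘read a chapter’, completing any task instance, including a rescued one, just advances a single bookmark, so the chapters always get read in order no matter which task you check off when.

```python
class GenericReadingQueue:
    """Hypothetical sketch: N copies of the generic task 'read a chapter'."""

    def __init__(self, num_chapters):
        self.tasks_remaining = num_chapters  # identical, interchangeable tasks
        self.next_chapter = 1                # single bookmark into the book

    def complete_one_task(self):
        """Check off any one 'read a chapter' task; returns the chapter read.

        Because tasks are non-specific, a task that was failed earlier and
        rescued later is satisfied by whatever chapter you are currently up
        to -- there is no 'chapter 1 task' that can block the sequence.
        """
        if self.tasks_remaining == 0:
            raise ValueError("no reading tasks left")
        chapter_read = self.next_chapter
        self.next_chapter += 1
        self.tasks_remaining -= 1
        return chapter_read


queue = GenericReadingQueue(num_chapters=5)
print(queue.complete_one_task())  # chapter 1
print(queue.complete_one_task())  # chapter 2, even if this task was 'rescued'
```

The design choice doing the work is that order lives in the bookmark, not in the tasks, so failing or reordering tasks can’t make you read out of sequence.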
Second sentence:
People say very different things depending on framing, so responses to any particularly-framed question are presumably not accurate, though I’d still take them as some evidence.
People say very different things from one another, so any particular person is highly unlikely to be accurate. An aggregate might still be good, but e.g. if people say such different things that three-quarters of them have to be totally wrong, then I don’t think it’s that much more likely that the last quarter is about right than that the answer is something almost nobody said.
First sentence:
In spite of the above, and the prior low probability of this being a reliable guide to AGI timelines, our paper was the 16th most discussed paper in the world. On the other hand, something like Ajeya’s timelines report (or even AI Impacts’ cruder timelines botec earlier) seems more informative, and to get way less attention. (I didn’t mean ‘within the class of surveys, interest doesn’t track informativeness much’ though that might be true, I meant ‘people seem to have substantial interest in surveys beyond what is explained by them being informative about e.g. AI timelines’.)
We didn’t do rounding though, right? Like, these people actually said 0?
Gazelle Ultimate T10+, 46 inch
Not quite sure what you mean, but all data is linked at the end of https://www.lesswrong.com/posts/3Rtvo6qhFde6TnDng/positly-covid-survey-2-controlled-productivity-data
n probably too small to read much into it, but yes: https://www.lesswrong.com/posts/3Rtvo6qhFde6TnDng/positly-covid-survey-2-controlled-productivity-data
I did ask about it, data here (note that n is small): https://www.lesswrong.com/posts/iTH6gizyXFxxthkDa/positly-covid-survey-long-covid
Yeah, I meant that early on in the vaccinations, officialish-seeming articles said or implied that breakthrough cases were very rare (even calling them ‘breakthrough cases’, to my ear, sounds like they are sort of more unexpected than they should be, but perhaps that’s just what such things are always called). That seemed false at the time even, before later iterations of covid made it more blatantly so. I think it was probably motivated partly by desire to convince people that the vaccine was very good, rather than just error, which I think is questionable behavior.
I agree that I’m more likely to be concerned about in-fact-psychosomatic things than average, and on the outside view, thus probably biased in that direction in interpreting evidence. Sorry if that colors the set of considerations that seem interesting to me. (I didn’t mean to claim that this was an unbiased list, sorry if I implied it.)
Some points regarding the object level:
The scenario I described was to illustrate a logical point (that the initially tempting inference from that study wasn’t valid). So I wouldn’t want to take the numbers from that hypothetical scenario and apply them across the board to interpreting other data. I haven’t thought through what range of possible numbers is really implied, or whether there are other ways to make sense of these prima facie weird findings (especially re lack of connection between having covid and thinking you have covid). If I put a lot of stock in that study, I agree there is some adjustment to be made to other numbers (and probably anyway—surely some amount of misattribution is going on, and even some amount of psychosomatic illness).
My description was actually of how you would get those results if approximately none of the illness was psychosomatic but a lot of it was other illnesses (the description would work with psychosomatic illnesses too, but I worry that you misread my point, since you are saying that in that world most things are psychosomatic, and my point was that you can’t infer that anything was psychosomatic).
If the scenario I described was correct, the rates of misattribution implied would be specific to that population and their total ignorance about whether they had covid, rather than a fact intrinsic to covid in general, and applicable to all times and places. I do find it very hard to believe that in general there is not some decently strong association between having covid and thinking you have covid, even if also a lot of errors.
It’s a single study, and single studies find all kinds of things. I don’t recall seeing other evidence supporting it. In such a case, I’m inclined to treat it as worthy of adding some uncertainty, but not worthy of a huge update about everything.
If this consideration reduced real long covid cases by a factor of two, it doesn’t feel like that changes the story very much (there’s a lot of factor-of-two-level uncertainty all over the place, especially in guessing what the rate is for a specific demographic), so I guess it doesn’t seem cruxy enough to give a lot of attention to.
I agree that mostly it isn’t salient to me that some fraction of cases are misattributions, and that maybe I should keep it in mind more, and say things like ‘it looks like many people who think they had covid can no longer do their jobs’ instead of taking things at face value. Though in my defense, this was a list of considerations, so I’m also not flagging all of the other corrections one might want to make to numbers throughout, as I might if I were doing a careful calculation.
It’s true that I don’t really believe that half of the bad cases at least are misattributions or psychosomatic—the psychosomatic story seems particularly far-fetched (particularly for the bad cases). Perhaps I’m mis-imagining what this would look like. Is there other evidence for this that you are moved by?
I thought rapid tests were generally considered to have a much lower false negative rate for detecting contagiousness, though they often miss people who are infected but not yet contagious. I forget why I think this, and haven’t been following possible updates on this story, but is that different from your impression? (Here’s one place I think saying this, for instance: https://www.rapidtests.org/blog/antigen-tests-as-contagiousness-tests) On this story, rapid tests immediately before an event would reduce overall risk by a lot.
Agree the difference between actors and real companions is very important! I think you misread me (see response to AllAmericanBreakfast’s above comment.)
Your current model appears to be wrong (supposing people should respond to fire alarms quickly).
From the paper:
“Subjects in the three naive bystander condition were markedly inhibited from reporting the smoke. Since 75% of the alone subjects reported the smoke, we would expect over 98% of the three-person groups to contain at least one reporter. In fact, in only 38% of the eight groups in this condition did even 1 subject report (p < .01). Of the 24 people run in these eight groups, only 1 person reported the smoke within the first 4 minutes before the room got noticeably unpleasant. Only 3 people reported the smoke within the entire experimental period.”
Fig 1 in the paper looks at a glance to imply also that the solitary people all reported it before 4 minutes.
Sorry for being unclear. The first video shows a rerun of the original experiment, which I think is interesting because it is nice to actually see how people behave, though it is missing footage of the (I agree crucial) three group case. The original experiment itself definitely included groups of entirely innocent participants, and I agree that if it didn’t it wouldn’t be very interesting. (According to the researcher in the footage, via private conversation, he recalls that the filmed rerun also included at least one trial with all innocent people, but it was a while ago, so he didn’t sound confident. See footnote there.)
It still looks to me like this is what I say, but perhaps I could signpost more clearly that the video is different from the proper experiment?
I think I would have agreed that answering honestly is a social gaffe a few years ago, and in my even younger years I found it embarrassing to ask such things when we both knew I wasn’t trying to learn the answer, but now I feel like it’s very natural to elaborate a bit, and it usually doesn’t feel like an error. e.g. ‘Alright—somewhat regretting signing up for this thing, but it’s reminding me that I’m interested in the topic’ or ‘eh, seen better days, but making crepes—want one?’ I wonder if I’ve become oblivious in my old age, or socially chill, or the context has changed. It could be partly that this depends on how well the conversationalists know each other, and it has been a slow year and a half for seeing people I don’t live with.
I’d say my identity is another condition that separates me from the worms, but you are right it is a special one, and perhaps ‘unconditionally’ means ‘only on condition of your identity’.