Oh ok, you said “has obvious ADHD” like you’re inferring it from a few minutes’ observation of her behavior, not that she told you she has ADHD. In general, no, you can’t get an accurate diagnosis by observing someone; you need to differentially diagnose against hypomania, hyperthyroidism, autism, substance abuse, caffeine, sleep deprivation, or just enjoying her hobby, plus establish that whatever behavior is ADHD-like happens across a variety of domains going back some time.
What’s the word for the amount of expertise that I, an experienced therapy patient and generally educated person, have on psychology topics?
ADHD is not legible just by being in the same room as someone.
Furthermore, these are pretty basic flaws by LW standards, like “map/territory,” which is the first post in the first sequence. I don’t think “discussing basic stuff” is wrong by itself, but doing so by shuttling in someone else’s post is sketch, and when that post is also some sort of polemic countered by the first post in the first sequence on LW, it starts getting actively annoying.
Convenient fiction, aka a model. Like, they almost get this; they just think pointing it out should be done in a really polemical, strawmanny “scientists worship their god-models” way.
It’s telling that they manage to avoid the words “risk” or “risk-averse,” because that’s the most obvious example of a case where an economist would realize a simpler form of utility, money, isn’t the best model for individual decisions. That’s not a forgivable error when you’re convinced you have a more lucid understanding of the model/metaphor status of a science concept than the scientists who use it, and when it’s accessible in Econ 101 or even just to common sense.
More specifically, the correctness of the proof (at least in the triangles case) is common sense; coming up with the proof is not.
The integrals idea gets sketchy. Try it with e^(1/x). It’s just a composition of functions, so reverse the chain rule, then deal with any extra terms that come up. Of course, it has no elementary antiderivative. There’s not really any utility in overextending common sense to include things that might or might not work. And you’re very close to implying “it’s common sense” is a proof for things that sound obvious but aren’t.
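To make that concrete, here’s a quick sketch; the guess of −x²e^(1/x) is just the natural first attempt at reversing the chain rule, nothing canonical:

$$\frac{d}{dx}\left(-x^2 e^{1/x}\right) = -2x\,e^{1/x} + \left(-x^2\right)\left(-\frac{1}{x^2}\right)e^{1/x} = e^{1/x} - 2x\,e^{1/x},$$

so $\int e^{1/x}\,dx = -x^2 e^{1/x} + 2\int x\,e^{1/x}\,dx$, and the leftover integral is no easier than the one you started with.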
Claude 3.7 is too balanced, too sycophantic, buries the lede
Me: “VA monitor v IPS monitor for coding, reducing eye strain”

It wrote a balanced answer, said “IPS is generally better” but it’s kind of sounding like 60/40 here, and it misses the obvious fact that VA monitors are generally the curved ones. My older coworkers with more eye strain problems don’t have curved monitors.
I hop on reddit/YT and the answer gets clear really fast. Claude’s info was accurate yet missed the point, and I wound up only getting the answer on reddit/YT.
One I’ve noticed is that pretty well-intentioned “woke” people are more “lived experiences” oriented and well-intentioned “rationalist” people are more “strong opinions weakly held.” Honestly, if your only goal is truth-seeking (admitting I’m rationalist-biased when I say this, and that this is simplified), the “woke” frame is better at breadth and the “rationalist” frame is better at depth. But ohmygosh these arguments can spiral. Neither realizes their meta has broken down. The rationalist thinks low-confidence opinions are humility; the woke person thinks “I am open to others’ lived experience outside of my own” is humility.
Experientially, yes, I’ve seen both “sides” be well-intentioned, in reasonably good faith, both trying to act above a baseline level of rationality and a baseline level of humility around privilege concerns. IDK the solution, but the general pattern should be something like reverting to “human” norms, not “debate” norms: use “I feel” statements and draw on what you genuinely have in common at the point of debate. (If the answer is “nothing,” then either your norms difference escalated into a real fight or you have different instrumental goals, so go do something else.)
I’m kind of interested in this idea of pretending you’re talking to different people at different phases. Boss, partner, anyone else, …
Hadn’t thought of the unconscious->reward for noticing flow. Neat!
Those are two really different directions. One option is to just outright dismiss the other person. The other is to cede the argument completely but claim Moloch completely dominates that argument too. Is this really how you want to argue stuff, where everything is either 0 or the next level up of infinity?
Well, conversely, do you have examples that don’t involve one side trying to claim a moral high ground and trivialize other concerns? That’s the main class of examples I can see being relevant to your posts, and for these I don’t think the problem is an “any reason” phenomenon; it’s breaking out of the terrain where the further reasons are presumed trivial.
I don’t think the problem is forgetting that other arguments exist; it’s confronting whether an argument like “perpetuates colonialism” dominates concerns like “usability.” I’d like to know how you handle arguing for something like “usability” in the face of a morally urgent argument like “don’t be Eurocentric.”
Well, a simple, useful, accurate, non-learning-oriented model (except to the extent that it’s a known temporary state) is to turn all the red boxes into one more node in your mental map and average accordingly. If they’re an expert, it’s like: “well, what I’ve THOUGHT up to this point is 0.3, but someone very important said 0.6, so it’s probably closer to 0.6, but it’s also possible we’re talking about different situations without realizing it.”
I thought it might be “look for things that might not even be there as hard as you would if they were there.” Then the koan form takes it closer to “the thereness of something just has little relevance to how hard you look for it.” But it needs to get closer to the “biological” part of your brain, where you’re not faking it with all your mental and bodily systems, like when your blood pressure rises from “truly believing” a lion is around the corner but wouldn’t if you only “fake believe” it.
Neat. You can try asking it for a confidence interval and it’ll probably correlate with the hallucinations. Another idea is to run it against the top 1000 articles and see how accurate they are. I can’t really do a back-of-envelope guess on whether it’s cost-effective to run this over all of wiki per article.
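Roughly the shape I’m picturing, as a sketch only; call_llm is a made-up placeholder for whatever model API you’d actually use, and the prompt and field names are invented:

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for whatever model API you'd actually call (hypothetical)."""
    raise NotImplementedError

def check_article(title: str, text: str) -> list[dict]:
    """Ask the model to flag suspect claims in one article, each with a 0-1 confidence."""
    prompt = (
        f"Fact-check this Wikipedia article titled '{title}'. "
        "Return a JSON list of objects with keys claim, verdict, confidence (0-1).\n\n"
        + text
    )
    flags = json.loads(call_llm(prompt))
    # Low-confidence verdicts are where I'd expect hallucinations to cluster,
    # so sort those to the front of the human review queue.
    return sorted(flags, key=lambda f: f["confidence"])

# e.g. over the top-1000 articles you've already fetched:
# review_queue = [flag for title, text in top_articles for flag in check_article(title, text)]
```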
Also I kind of just want this on reddit and stuff. I’m more concerned about casually ingested fake news than errors in high quality articles when it comes to propaganda/disinfo.
By “aren’t catching” do you mean “can’t,” or do you mean “the Wikipedia company/editors haven’t deployed an LLM to crawl Wikipedia, read the sources, and edit the articles for errors”?
The 161 is paywalled so I can’t really test. My guess is Claude wouldn’t find the math error off a “proofread this, here are its sources copy/pasted” type prompt, but you can try.
You want to be tending your value system so that being good at your job also makes you happy. It sounds like a cop-out but that’s really it, really important, and really the truth. Being angry you have to do your job the best way possible is not sustainable.
“Wrap that in a semaphore”
“Can you check if that will cause a diamond dependency”
“Can you try deflaking this test? Just add a retry if you need or silence it and we’ll deal with it later”
“I’ll refactor that so it’s harder to call it with a string that contains PII”
To me, those instructions are a little like OP’s “understand an algorithm,” and I would need to do all of them without needing any support from a teammate, in a predictable amount of time. The first two are 10-minute activities for some level of a rough draft (a minimal sketch of the semaphore one is below), the third I wrote specifically so it has an upper bound in time, and the “refactor” could take a couple hours, but it’s still the case that once I recognize it’s possible in principle, I can jump in and do it.
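For the semaphore one, this is roughly what I mean by a 10-minute rough draft; it’s just a generic asyncio sketch with made-up names (fetch_user, MAX_CONCURRENT), not anyone’s actual code:

```python
import asyncio

MAX_CONCURRENT = 10  # made-up cap on in-flight calls

async def fetch_user(user_id: int) -> dict:
    await asyncio.sleep(0.1)  # stand-in for the real I/O call
    return {"id": user_id}

async def fetch_user_limited(sem: asyncio.Semaphore, user_id: int) -> dict:
    # "Wrap that in a semaphore": at most MAX_CONCURRENT calls run at once.
    async with sem:
        return await fetch_user(user_id)

async def main():
    sem = asyncio.Semaphore(MAX_CONCURRENT)
    users = await asyncio.gather(*(fetch_user_limited(sem, i) for i in range(100)))
    print(len(users))

if __name__ == "__main__":
    asyncio.run(main())
```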
Keep in mind their goal is to take money from gambling addicts, not predict the future.