Nick_Tarleton
I’ve gotten things from Michael’s writing on Twitter, but also wasn’t distinguishing him/Ben/Jessica when I wrote that comment.
I can attest to something kind of like this; in mid-late 2020, I
already knew Michael (but had been out of touch with him for a while) and was interested in his ideas (but hadn’t seriously thought about them in a while)
started doing some weird, intense introspection (no drugs involved) that led to noticing some deeply surprising things & entering novel, sometimes-disruptive mental states
noticed that Michael/Ben/Jessica were talking about some of the same things I was picking up on, and started reading & thinking a lot more about their online writing
(IIRC, this noticing was not entirely conscious — to some extent it was just having a much stronger intuition that what they were saying was interesting)
didn’t directly interact with any of them during this period, except for one early phone conversation with Ben which helped me get out of a very unpleasant state (that I’d gotten into by, more or less, decompartmentalizing some things about myself that I was unprepared to deal with)
I have understood and become convinced of some of Michael’s/Ben’s/Jessica’s stances through a combination of reading their writing and semi-independently thinking along similar lines, during a long period of time when I wasn’t interacting with any of them, though I have interacted with all of them before and since.
… those posts are saying much more specific things than ‘people are sometimes hypocritical’?
“Can crimes be discussed literally?”:
some kinds of hypocrisy (the law and medicine examples) are normalized
these hypocrisies are / the fact of their normalization is antimemetic (OK, I’m to some extent interpolating this one based on familiarity with Ben’s ideas, but I do think it’s both implied by the post, and relevant to why someone might think the post is interesting/important)
the usage of words like ‘crime’ and ‘lie’ departs from their denotation, to exclude normalized things
people will push back in certain predictable ways on calling normalized things ‘crimes’/‘lies’, related to the function of those words as both description and (call for) attack
“There is a clear conflict between the use of language to punish offenders, and the use of language to describe problems, and there is great need for a language that can describe problems. For instance, if I wanted to understand how to interpret statistics generated by the medical system, I would need a short, simple way to refer to any significant tendency to generate false reports. If the available simple terms were also attack words, the process would become much more complicated.”
This seems ‘unsurprising’ to me in, and only in, an antimemetic Everybody Knows sense.
“Guilt, Shame, and Depravity”:
hypocrisy is often implemented through internal dissociation (shame)
ashamed people form coalitions around a shared interest in hiding information
[some modeling of/claims about how these coalitions work]
[some modeling of the incentives/conditions that motivate guilt vs. shame]
This is a bit more detailed than ‘people are sometimes hypocritical’; and I don’t think of the existence of ashamed coalitions to cover up norm violations in general (as opposed to relatively-more-explicitly-coordinated coalitions to cover up more-specific kinds of violations) as a broadly unsurprising claim. The degree to which shame can involve forgetting one’s own actions & motives, which Ben describes, certainly felt like a big important surprise when I (independently, two years before that post) consciously noticed it in myself.
My guess would be that the bailey is something like “everyone is 100% hypocritical about everything 100% of the time, all people are actually 100% stupid and evil; except maybe for the small group of people around Michael Vassar” or something like that.
I haven’t picked up this vibe from them at all (in writing or in person); I have sometimes picked up a vibe of ‘we have uniquely/indispensably important insights’. YMMV, of course.
Embarrassingly, that was a semi-unintended reaction (I would bet a small amount against that statement if someone gave me a resolution method, but am not motivated to figure one out; I realized this a second after making it) that I hadn’t figured out how to remove by the time you made that comment. Sorry.
It sounds to me like the model is ‘the candidate needs to have a (party-aligned) big blind spot in order to be acceptable to the extremists(/base)‘. (Which is what you’d expect, if those voters are bucketing ‘not-seeing A’ with ‘seeing B’.)
(Riffing off from that: I expect there’s also something like, Motive Ambiguity-style, ‘the candidate needs to have some, familiar/legible(?), big blind spot, in order to be acceptable/non-triggering to people who are used to the dialectical conflict’.)
if my well-meaning children successfully implement my desire never to die, by being uploaded, and “turn me on” like this with sufficient data and power backups but lack of care; or if something else goes wrong with the technicians involved not bothering to check if the upload was successful in setting up a fully virtualized existence complete with at least emulated body sensations, or do not otherwise check from time to time to ensure this remains the case;
These don’t seem like plausible scenarios to me. Why would someone go to the trouble of running an upload, but be this careless? Why would someone running an upload not try to communicate with it at all?
A shell in a Matrioshka brain (more generally, a Dyson sphere being used for computation) reradiates 100% of the energy it captures, just at a lower temperature.
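A quick energy-balance sketch (assuming a roughly spherical shell in steady state, radiating as a blackbody): a shell at radius R that intercepts the star’s full luminosity L must re-emit that same power, so

L = 4\pi R^2 \sigma T^4 \quad\Longrightarrow\quad T = \left( \frac{L}{4 \pi R^2 \sigma} \right)^{1/4}

i.e. the power out equals the power in; a larger shell just emits it over more area at a lower effective temperature.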
The AI industry people aren’t talking much about solar or wind, and they would be if they thought it was more cost effective.
I don’t see them talking about natural gas either, but rather about nuclear or even fusion, which seems like an indication that whatever’s driving their choice of what to talk about, it isn’t short-term cost-effectiveness.
I doubt it (or at least, I doubt that power plants will be a bottleneck as soon as this analysis says). Power generation/use varies widely over the course of a day and of a year (seasons), so the 500 GW number is an average, and generating capacity is overbuilt: a graph on the same EIA page shows generating capacity above 1000 GW and non-stagnant (excluding renewables, it declined slightly from 2005 to 2022 but is still above 800 GW).
This seems to indicate that a lot of additional demand[1] could be handled without building new generation, at least (and maybe not only) if that demand is willing to shut down at infrequent times of peak load; see the rough arithmetic after the footnote. (Yes, operators will want to run as much as possible, but would accept some downtime if necessary to operate at all.)
This EIA discussion of cryptocurrency mining (estimated at 0.6% to 2.3% of US electricity consumption!) is highly relevant, and seems to align with the above. (E.g. it shows increased generation at existing power plants with attached crypto mining operations, mentions curtailment during peak demand, and doesn’t mention new plant construction.)
[1] Probably not as much as implied by the capacity numbers, since some of that capacity is peaking plants and/or just old, meaning not only inefficient but sometimes limited by regulations in how many hours it can operate per year. But still.
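Rough arithmetic for the claim above (the ~700 GW peak figure is my own illustrative assumption, not something on the EIA page): with average load around 500 GW and non-renewable capacity somewhere in the 800–1000 GW range,

headroom vs. average load ≈ 800 GW − 500 GW ≈ 300 GW
headroom vs. an assumed ~700 GW peak ≈ 800 GW − 700 GW ≈ 100 GW

so tens of GW of new, curtailment-tolerant demand could in principle be served by running existing plants more hours, with shutdowns during the relatively few peak-load hours.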
Confidentiality: Any information you provide will not be personally linked back to you. Any personally identifying information will be removed and not published. By participating in this study, you are agreeing to have your anonymized responses and data used for research purposes, as well as potentially used in writeups and/or publications.
Will the names (or other identifying information if it exists, I haven’t taken the survey) of the groups evaluated potentially be published? I’m interested in this survey, but only willing to take it if there’s a confidentiality assurance for that information, even independent of my PII. (E.g., I might want to take it about a group without potentially contributing to public association between that group and ‘being a cult’.)
The hypothetical bunker people could easily perform the Cavendish experiment to test Newtonian gravity; there just (apparently) isn’t any way they’d arrive at the hypothesis.
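(For concreteness, the standard torsion-balance relation: with a rod of length L carrying the small masses, large masses M each at distance r from the nearest small mass, equilibrium deflection angle \theta, and torsion-oscillation period T, Newtonian gravity gives

G = \frac{2 \pi^2 L r^2 \theta}{M T^2}

so everything needed is tabletop-measurable; what they lack is the hypothesis, not the apparatus.)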
As a counterpoint, I use Firefox as my primary browser (I prefer a bunch of little things about its UI), and this is a complete list of glitches I’ve noticed:
The Microsoft account login flow sometimes goes into a loop of asking me for my password
Microsoft Teams refuses to work (‘you must use Edge or Chrome’)
Google Meet didn’t use to support background blurring, but does now
A coworker reported that a certain server BMC web interface didn’t work in Firefox, but did in Chrome (on Mac) — I found (on Linux, idk if that was the relevant difference) it broke the same way in both, which I could get around by deleting a modal overlay in the inspector
(I am not a lawyer)
The usual argument (e.g.) for warrant canaries being meaningful is that the (US) government has much less legal ability to compel speech (especially false speech) than to prohibit it. I don’t think any similar argument holds for private contracts; AFAIK they can require speech, and I don’t know whether anything is different if the required speech is known by both parties to be false. (The one relevant search result I found doesn’t say there’s anything preventing such a contract; Claude says there isn’t, but it could be thrown out on grounds of public policy or unconscionability.)
I would think this ‘canary’ still works, because it’s hard to imagine OpenAI suing, or getting anywhere with a suit, for someone not proactively lying (when silence could mean things besides ‘I am subject to an NDA’). But, if a contract requiring false speech would be valid,
insofar as this works it works for different reasons than a warrant canary
it could stop working, if future NDAs are written with it in mind
(Quibbles aside, this is a good idea; thanks for making it!)
Upvoted, but weighing in the other direction: Average Joe also updates on things he shouldn’t, like marketing. I expect the doctor to have moved forward some in resistance to BS (though in practice, not as much as he would if he were consistently applying his education).
And the correct reaction (and the study’s own conclusion) is that the sample is too small to say much of anything.
(Also, the “something else” was “conventional treatment”, not another antiviral.)
I find the ‘backfired through distrust’/‘damaged their own credibility’ claim plausible; it agrees with my prejudices, and I think I see evidence of similar things happening elsewhere. But the article doesn’t contain evidence that it happened in this case, and even though it’s a priori likely and worth pointing out, the claim that it did happen should come with evidence. (This is a nitpick, but I think it’s an important nitpick in the spirit of sharing likelihood ratios, not posterior beliefs.)
if there’s a domain where the model gives two incompatible predictions, then as soon as that’s noticed it has to be rectified in some way.
What do you mean by “rectified”, and are you sure you mean “rectified” rather than, say, “flagged for attention”? (A bounded approximate Bayesian approaches consistency by trying to be accurate, but doesn’t try to be consistent. I believe ‘immediately update your model somehow when you notice an inconsistency’ is a bad policy for a human [and part of a weak-man version of rationalism that harms people who try to follow it], and I don’t think this belief is opposed to “rationalism”, which should only require not indefinitely tolerating inconsistency.)
Among ‘hidden actions OpenAI could have taken that could (help) explain his death’, I’d put harassment well above murder.
Please don’t do this.