IrenicTruth
If you follow standard DEI criteria, I’m commenting on LessWrong; I don’t do “standard.”😉
More seriously, I apologize. I should have clarified what I meant by diversity. In particular, I mean that diverse groups are spread out in a parsimonious description space.
A pretty detailed example
As a concrete example of one understanding that would match my idea of diversity, consider a very high-dimensional space representing the available people who can do the work, measured on as many axes as you can use to characterize them (characteristics of mind, body, experiences, etc.), then reduced by a technique that leaves the remaining dimensions with little mutual information about one another. Define a “diversity-growing procedure” for adding members as one that chooses new members farthest away from the current subset. The more ways a diversity-growing procedure would choose a particular group, and the fewer exceptions that need to be made to the procedure to end up with that group, the more diverse the group.
Making an instance of this concrete example, imagine that our parsimonious space is 2D and that candidates are at all integer intersections of {0,1,2,3,4} × {0,1,2,3,4}. If we choose candidates (2,2), (0,0), and (2,4), how diverse is that group? If you start with (0,0), the farthest away is (4,4), so you’ll need to make an exception to add (2,4) (the farthest actually in the group). From (2,4), the farthest away is (4,0); once again, we need an exception to add (2,2). If we start with (2,2), then (0,0) is one of the farthest away, but we need an exception to add (2,4). The sequence (2,4) → (0,0) → (2,2) requires one exception. So, we have three ways with four exceptions. (I may have gotten some of this wrong since I did it in my head, but I think this gives the picture.)
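The counting above can be sketched in code. This is a minimal sketch under one particular formalization: “farthest” means maximizing the minimum distance to the members chosen so far, ties count as non-exceptions, and every ordering of the group is checked. My hand count used a slightly different rule, so the exact numbers differ:

```python
from itertools import permutations
from math import dist, isclose

# Candidate pool: all integer points in {0..4} x {0..4}.
candidates = [(x, y) for x in range(5) for y in range(5)]
group = [(2, 2), (0, 0), (2, 4)]

def min_dist(p, chosen):
    """Distance from point p to the nearest already-chosen member."""
    return min(dist(p, c) for c in chosen)

def exceptions(order):
    """Count steps where the next member is not among the farthest candidates."""
    chosen, count = [order[0]], 0
    for nxt in order[1:]:
        remaining = [p for p in candidates if p not in chosen]
        best = max(min_dist(p, chosen) for p in remaining)
        if not isclose(min_dist(nxt, chosen), best):
            count += 1
        chosen.append(nxt)
    return count

counts = sorted(exceptions(order) for order in permutations(group))
print(counts)  # [1, 1, 1, 1, 2, 2]
```

Under this rule, every ordering of the group needs at least one exception, which matches the intuition that {(2,2), (0,0), (2,4)} is a middling-diversity group.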
This is just one example. Many ways to define diversity match my intuition of spread in a parsimonious description space.
A note on parsimony
Since we’re looking for diversity to help us in a particular context, we should choose dimensions that predict differences in that context. For example, a characteristic like “ability to roll your tongue” is probably less predictive of behavior in a research environment than gender, so we might want to down-weight it. However, we don’t yet have good models of what matters, so it might be hubris to down-weight characteristics before we know they don’t matter for covering the research hypothesis space; that knowledge has to come from looking at the effects in groups actually doing research.
[I] suspect [vaccines] (or antibiotics) account for the majority of the value provided by the medical system
Though I agree that vaccines and antibiotics are extraordinarily beneficial and cost-effective interventions, I suspect you’re missing essential value fountains in our medical system. Two that come to mind are surgery and emergency medicine.
I’ve spoken to several surgeons about their work, and they all said that one of the great things about their job is seeing the immediate and obvious benefits to patients. (Of course, surgery wouldn’t be nearly as effective without antibiotics, so potentially, this smuggles something in.)
Emergency medicine also provides a lot of benefits. Someone was going to die from bleeding, and we sewed them up. Boom! We avoid a $2.5 million loss. Accidental deaths would be much higher in the US without emergency medicine personnel.
Another one to look into would be perinatal care. I haven’t examined it, but I suspect it adds billions or trillions to the US economy by producing humans with a higher baseline health and capacity.
If a product derives from Federally-funded research, the government owns a share of the IP for that product. (This share should be larger than the monetary investment in the grants that bore fruit since the US taxpayer funds a lot of early-stage research, only a little of which will result in IP. So, this system must account for the investments that didn’t pan out as part of the total investment required to produce that product.)
Fund grants based on models of downstream benefit. Four things that should be included as “benefits” in this model are increased health span, increased capacity for bioengineering, a larger pool of competent researchers, and a diverse set of researchers. Readers from backgrounds like mine may balk at “diversity” as an explicit benefit; however, diversity is vital to properly exploring the hypothesis space without the bias imposed by limited perspectives. Edit: see the replies for a discussion of what I mean by diversity.
Classify aging as a disease/disorder for administrative purposes. Set the classification to be reviewed/revised in 20 years after we have a better picture. (Whether it should be considered a single disease from a reality-modeling perspective is uncertain, but being able to target it in grants will give us more research that will help us model it better.)
Encourage inclusionary zoning at a Federal level.
Create a secure government-wide password manager. (If necessary, the HHS is large enough to do this alone, but the benefit would scale if other agencies used it.) Currently, HHS passwords may not be placed in password managers, leaving the HHS open to credential-phishing attacks. The project could be open-sourced to allow private firms to benefit from the research and engineering.
Make all health spending tax-deductible, whether or not it is funneled through an insurance company. (This is probably the domain of Congress, but maybe there is something HHS can do.)
Reduce the bureaucracy/red tape for TANF recipients.
Combine FEMA and ASPR.
Work with the Census Bureau to collect and publish statistics on human flourishing in the US and push/advertise to make those numbers top-line numbers that the electorate (and thus politicians) pay attention to. Improving these statistics can be a “benefit” in the grant funding proposal above. HHS can also work to create conditional markets to predict how different decisions will affect those statistics.
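The portfolio accounting in the IP-share suggestion above can be illustrated with made-up numbers (all figures here are hypothetical, chosen only to show why the government’s share should be priced off the whole grant portfolio rather than the one grant that succeeded):

```python
# Hypothetical numbers to illustrate portfolio-level accounting.
grants_funded = 20          # early-stage grants the government funds
cost_per_grant = 1_000_000  # dollars per grant
grants_yielding_ip = 1      # only one grant leads to a marketable product

direct_cost = cost_per_grant * grants_yielding_ip
portfolio_cost = cost_per_grant * grants_funded

# The government's share of the product's IP should reflect the whole
# portfolio it had to fund to get one success, not just the winner.
print(direct_cost)     # 1000000
print(portfolio_cost)  # 20000000
```

With a 1-in-20 hit rate, pricing the share off the single successful grant would understate the taxpayer’s actual investment by a factor of twenty.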
I shy away from fuzzy logic because I used it as a formalism to justify my religious beliefs. (In particular, “Possibilistic Logic” allowed me to appear honest to myself—and I’m not sure how much of it was self-deception and how much was just being wrong.)
The critical moment in my deconversion came when I realized that if I was looking for truth, I should reason according to the probabilities of the statements I was evaluating. Thirty minutes later, I had gone from a convinced Christian speaking to others, leading in my local church, and basing my life and career on my beliefs to an atheist who was primarily uncertain about atheism because of self-distrust.
Grounding my beliefs in falsifiable statements and probabilistic-ish models has been a beneficial discipline that forces me to recognize my limits and helps predict the outcomes of my actions. I don’t know if I could do the same with fuzzy logic and “reasoning by model.”
The next post is Secular interpretations of core perennialist claims. Zhukeepa should edit the main text to explicitly link to it rather than just mentioning that it exists. (Or people could upvote this comment so it’s at the top. I don’t object to more good karma.)
I think you’re missing a few parts. The Autofac (as specified) cannot reproduce the chips and circuit boards required for the AI, the cameras’ lenses and sensors, or the robot’s sensors and motor controllers. I don’t think this is an insurmountable hurdle: a low-tech (not cutting-edge) set of chips and discrete components would serve well enough for a stationary computer. Similarly, high-res sensors are not required. (Take it slow and replace physical resolution with temporal resolution and multiple samples.)
Second, the reproduced Autofacs should be built on movable platforms so different groups can get their own. (Someone comes with a truck and a few forklifts, lifts the platform onto the truck, and drives the Autofac to the new location.)
For large enough cases, changing the legal system is a way to make the debtor/lender “disappear.” Ownership and debt are both based on society-level agreement.
The r/samplesize version of this post needs some edits to match the community expectations.
The “current leader is also the founder” is a reasonable characteristic common in cults. Many cult-like religious organizations exist to create power or wealth for the founder or the founder’s associates.
However, I suspect that the underlying scoring function is a simple additive model (widespread in psychology) in which each answer contributes a weight toward one of the outcomes. Since this characteristic is most valuable in combination (intensifying other factors that indicate cultishness), it doesn’t serve very well in the current framework.
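A toy sketch of the difference, with entirely hypothetical question names and weights: a purely additive scorer credits “founder-led” on its own, while a model with an interaction term only credits it when other cult indicators are also present:

```python
# Hypothetical survey answers: 1 = yes, 0 = no.
answers = {"founder_led": 1, "punishes_leaving": 0, "isolates_members": 0}

# Purely additive model: each answer contributes its weight independently.
weights = {"founder_led": 2, "punishes_leaving": 5, "isolates_members": 5}
additive_score = sum(weights[k] * v for k, v in answers.items())

# Interaction model: founder-led only matters alongside other indicators.
other = answers["punishes_leaving"] + answers["isolates_members"]
interaction_score = 5 * other + 2 * answers["founder_led"] * other

print(additive_score, interaction_score)  # 2 0
```

An organization whose only cult-like trait is a founder-leader gets a nonzero score from the additive model but zero from the interaction model, which is closer to how I think this characteristic should behave.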
You may want to mention in the first question asking about cultishness that people will get to revise their initial estimate after seeing the rest of the questions. I discarded and restarted the survey halfway through because I realized your definition was far removed from my initial one. If I’d known about the ability to re-estimate at the end, you’d have another data point. (For reference, my initial number was 25%, which I dropped to 4% on the re-run. The final score ended up being 3%.)
Your argument boils down to:
Objectivity is X
Y is not X
(Because you want to be objective) Don’t do Y
I want to Win. Being Pascal Mugged is not Winning. Therefore I will make choices to not be Pascal Mugged. If that requires not being “objective,” according to your definition, I don’t want to be objective.
However, I have my own use of “objective” that comports well with adapting to new information and using my predictive powers. But I don’t want to argue that my usage is better or worse; it will be fruitless. I mention it so readers won’t think I’m hypocritical if I say I’m attempting to be objective.
This is a drive-by comment. I write it with hopes for our mutual benefit. However, do not expect me to check back or reply to anything you say.
I haven’t listened to the video yet. (It’s very long, so I put it on my watch-later list.) Nor have I finished Eliezer’s Sequences (I’m on “A Technical Explanation of Technical Explanation.”) However, I looked at the above summaries to decide whether it would be worth listening to the video.
Potential Weaknesses
None of the alternative books say anything about statistics. A rough intro to Bayesian statistics is an essential part of the Sequences. Without this, you have not made them superfluous.
A rough understanding of Bayesian statistics is a valuable tool.
Anecdote: I took courses in informal logic when I was a teenager and was aware of cognitive biases. However, the a-ha moment that took me out of the religion of my childhood was to ask whether a particular theodicy was probable. This opened the way to ask whether some of my other beliefs were probable (not possible, as I’d done before). Within an hour of asking the first question, I was an atheist. (Though it took me another year to “check my work” by meeting with the area pastors and elders.) I thought to ask it because I’d been studying statistics. So, for me, the statistical lens helped in the case where the other lenses failed to reveal my errors. I already knew a host of problems with the Bible, but the non-probabilistic approaches allowed me to deal with the evidence piece by piece. I could propose a fix for each one. For example, following Origen, I could say that Genesis 1 was an allegory. Then it didn’t count against the whole structure.
The above anecdote took place several years before I encountered LessWrong. I’m not saying that the Sequences/LessWrong helped me escape religion. I’m saying that Bayesian stats worked where other things failed, so it was useful to me, and you should not consider that you’ve replaced the Sequences if you leave it out.
Handbook of the History of Logic: The Many Valued and Nonmonotonic Turn in Logic is on the reading list. I haven’t read it, but the title gives me pause. Nonmonotonic logics are subtle and can be misapplied. I misapplied Zadeh’s possibilistic logic to help justify my theism.
The promotion of the LSAT and legal reasoning seems out of place. Law is the opposite of truth-seeking. Lawyers create whatever arguments they can to serve their clients. A quick Google couldn’t dig up statistics, but I’d guess that lawyers are theists at a higher rate than scientists are.
For me, the LessWrong community is a place I can get better data and predictions than other news sources. I know only one person who is also on LessWrong. They live across an ocean from me, and we haven’t talked in 8 months. I don’t think hanging out and playing board games is a major draw. If this is the thesis, it is far from my personal experience.
Potential Strengths
The emphasis of the Sequences on epistemic over instrumental rationality.
Other people in the LessWrong community have pointed this out. (I remember a sequence with the word “Hammer” in it that talks about instrumental rationality.)
The alternative reading list does not seem to address instrumental rationality.
Treating suffering as interchangeable doesn’t always produce good outcomes. (Though I don’t know how to deal with this—if you can only take one course of action, you must reify everything into a space where you can compare options.)
Other
An alternative to piracy in the USA is to request books through the Interlibrary Loan system. It is free in most places. Also, academic libraries at public universities frequently offer membership for a small fee ($10-$20 per month) or free to community members (especially students), so if you have a local university, you might ask them.
Duplicating the description
TimePoints
00:00 intro
0:53 most of the sequences aren’t about rationality; AI is not rationality
3:43 lesswrong and IQ mysticism
32:20 lesswrong and something-in-the-waterism
36:49 overtrusting of ingroups
39:35 vulnerability to believing people’s BS self-claims
47:35 norms aren’t sharp enough
54:41 weird cultlike privacy norms
56:46 realnaming as “doxxing”
58:28 no viable method for calling out rumors/misinformation if realnaming is ‘doxxing’
1:00:16 the strangeness and backwardness of LW-sphere privacy norms
1:04:07 EA: disregard for the homeless and refusal to do politics because it’s messy
1:10:16 EA: largely socially inept, does not understand how truly bad the SBF situation is
1:13:36 EA: treatment of utilitarianism and consciousness is simplistic
1:20:20 EA rigor: vitamin A charity example
1:23:39 extreme techno optimism and weak knowledge of human biology
1:25:24 exclusionary white nerd millennial culture
1:27:23 comfort class culture
1:30:25 pragmatics-agnosticism
1:33:13 shallow analysis of empirical topics
1:34:18 idiosyncrasies of communication, e.g. being extremely obtuse at the thesis level
1:39:50 epistemic rationality matters much more than instrumental rationality
1:43:00 the scene isn’t about rationality, it’s about hanging out and board games (which is fine, just don’t act like you’re doing anything important)
References
sample WAIS report https://www.pearsonassessments.com/co...
what is g https://www.youtube.com/watch?v=jSo5v...
childhood IQ vs. adult IQ https://pubmed.ncbi.nlm.nih.gov/12887...
wonky attempts to measure IQ above 160 https://archive.vn/kFCY1
computer-based verbal memory test https://humanbenchmark.com/tests/verb...
typing speed / IQ https://eric.ed.gov/?id=ED022127
simple choice reaction time https://www.psytoolkit.org/lessons/ex...
severity of 83 IQ https://www.youtube.com/watch?v=5-Ur7...
googleability of WAIS https://nda.nih.gov/data_structure.ht...
uses of WAIS in clinical care https://www.ncbi.nlm.nih.gov/pmc/arti...
drunk reaction time experiment https://imgur.com/a/IIZpTol
how g correlates with WAIS https://archive.vn/gyDcM
low murderer IQ https://archive.vn/SrenV
tom segura bit about the first 48 https://www.youtube.com/watch?v=B0l2l...
rarity of perfect LSAT scores (30 out of 100,000) https://archive.vn/KWAzf
limits on human reading speed (1) https://archive.vn/IVU8x
limits on human reading speed (2) https://psycnet.apa.org/record/1998-1...
kinobody fitness callout by philion https://www.youtube.com/watch?v=WjytE...
summary of lesswrong drama (Jan-Mar. 2022) https://alfredmacdonald.medium.com/su...
leverage / geoff anders pseudo-cult https://archive.vn/BKvtM
the questionability of michael vassar and related organizations https://archive.vn/8A8QO
sharp vs soft culture https://archive.vn/VOpya
something-in-the-waterism https://alfredmacdonald.medium.com/so...
on the fakeness of many bayesian priors https://alfredmacdonald.substack.com/...
criticism of the “postrationalist” subculture and the problems created by pseudonyms and hyper-privacy norms https://alfredmacdonald.substack.com/...
proliferation of “technoyogi” woo in this culture due to lack of BS-calling norms https://alfredmacdonald.substack.com/...
questionability of the vitamin A charity I mentioned https://archive.vn/2AxlK
MIRI support from Open Philanthropy https://archive.vn/JW6WT
MIRI publication record https://archive.vn/9hIhT
MIRI staff https://archive.vn/hJeuT
MIRI budget, 50% of which is spent on research personnel https://archive.vn/z6bvz
benefits of sharp culture (or at least a mean robot boss) https://archive.vn/onIfM
daniel dennett on, among other things, the problems with treating all suffering as interchangeable https://archive.vn/5SLEy
on reading comprehension limits: https://catalog.shepherd.edu/mime/med… -- while a 50th percentile student reads (with retention) at 250wpm and a 75th at 500wpm for “general expository reading (e.g. news)”, this same group reads at a 50th percentile of 149wpm and a 75th percentile of 170wpm for “advanced scientific and/or technical material”. assuming a gaussian distribution, the distance between 50th percentile and 75th percentile is 2/3s an SD—so with an SD of ~31.5, reading said material at 306.5WPM is 5SD from the mean, or about 1/3.5 million. the average audible narration rate is 155wpm, so this severely puts into question those who say they’re 2xing or even 1.75xing advanced audiobooks/lectures.
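The percentile arithmetic in that last note checks out against the standard normal distribution. A quick verification with the standard library (the 149 wpm and 170 wpm figures are the 50th and 75th percentiles from the linked source):

```python
from statistics import NormalDist

nd = NormalDist()
z75 = nd.inv_cdf(0.75)      # ~0.674 SD between the 50th and 75th percentiles
sd = (170 - 149) / z75      # ~31 wpm (the note rounds 2/3 SD to get ~31.5)
wpm_5sd = 149 + 5 * 31.5    # 306.5 wpm is 5 SD above the mean
odds = 1 / (1 - nd.cdf(5))  # one-tailed rarity of a 5 SD reader

print(round(sd, 1), wpm_5sd, f"~1 in {odds:,.0f}")  # roughly 1 in 3.5 million
```

So the claimed “1 in 3.5 million” rarity follows directly from the two percentile figures, given the Gaussian assumption.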
Duplicating the first comment (@alfredmacdonald’s proposed alternative)
A READING LIST FOR RATIONALITY THAT IS NOT LESSWRONG / RENDERS THE SEQUENCES SUPERFLUOUS
objection: “but I learned a lot about rationality through lesswrong”
response: maybe, but probably inadequately.
while unorthodox, I usually suggest this above everything else: the PowerScore Logical Reasoning Bible, while meant as LSAT prep, is the best test of plain-language reasoning that I am aware of. the kinds of questions you are meant to do will humble many of you. https://www.amazon.com/PowerScore-LSAT-Logical-Reasoning-Bible/dp/0991299221 and you can take a 10-question section of practice questions at https://www.lsac.org/lsat/taking-lsat/test-format/logical-reasoning/logical-reasoning-sample-questions — many of you will not get every question right, in which case there is room to sharpen your ability and powerscore’s book helps do that.
https://www.amazon.com/Cengage-Advantage-Books-Understanding-Introduction/dp/1285197364 in my view, the best book on argumentation that exists; worth reading either alongside PowerScore’s book, or directly after it.
https://www.amazon.com/Rationality-What-Seems-Scarce-Matters/dp/B08X4X4SQ4 pinker’s “rationality” is an excellent next step after learning how to reason through the previous two texts, since you will establish what rationality actually is.
https://www.amazon.com/Cambridge-Handbook-Reasoning-Handbooks-Psychology/dp/0521531012 this is a reference text, meaning it’s not meant to be read front-to-back. it’s one of the most comprehensive of its kind.
https://www.amazon.com/Handbook-History-Logic-Valued-Nonmonotonic/dp/044460359X — this is both prohibitively and ludicrously expensive, so you will probably need to pirate it. however, this history of logic covers many useful concepts.
https://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555 this is a standard text that established “irrationality” as a mainstream academic concept. despite being a psychologist, some of kahneman’s work won him the nobel prize in economics in 2002, shared with vernon smith.
https://www.amazon.com/Predictably-Irrational-audiobook/dp/B0014EAHNQ this is another widely-read text that expands on the mainstream concept of irrationality.
https://www.amazon.com/BIASES-HEURISTICS-Collection-Heuristics-Everything/dp/1078432317 it is exactly what it says: a list of about 100 cognitive biases. many of these biases are worth rereading and/or flashcarding. there is also https://en.wikipedia.org/wiki/List_of_cognitive_biases
https://www.amazon.com/Informal-Logical-Fallacies-Brief-Guide/dp/0761854339 also exactly what it says, but with logical fallacies rather than biases. (a bias is an error in weight or proportion or emphasis; a fallacy is a mistake in reasoning itself.) there is also https://en.wikipedia.org/wiki/List_of_fallacies
here is another fantastic handbook of rationality, which is a wonderfully integrated work spanning psychology, philosophy, law, and other fields with 806 pages of content. https://www.amazon.com/Handbook-Rationality-Markus-Knauff/dp/0262045079 (it is quite expensive—no one will blame you if you pirate it from libgen.)
you will learn more through these texts than through the LessWrong Sequences. as mentioned, many of these are expensive, and no one will blame you if you need to pirate/libgen them. many or maybe even most of these you will need to reread some of these texts, perhaps multiple times.
“but I’d rather have a communi - ” yes, exactly. hence the thesis of a video I made: lesswrong is primarily nerds who want a hangout group/subculture, rather than a means of learning rationality, and this disparity between claimed purpose and actual purpose produces most of the objections people have and many of my objections in my video, and why I have created this alternate reading list.
Reading the comments here, I think I may halve my estimate of self-install time.
I’ve wanted to install a bidet for 8+ years. However, I’ve always had higher-priority projects.
Costs that deter me:
What for you is a 20-minute project will be 4-8 hours for me because it involves plumbing (and I want it to not leak). The fastest plumbing project I’ve ever had (cleaning the p-trap beneath the bathroom sink) took 1.5 hours.
Hiring a contractor will be $100 because I live in a high-rent area, and they need to cover the expense of coming out. It will take me 1 hour to choose, schedule, and oversee a contractor.
I don’t know how to choose a bidet. It’ll take me 2-4 hours to research them.
The benefits are lower for me than for you:
I estimate it will save six rolls of toilet paper per year, which comes to about $20. If I value my time at $50/hour, hiring a contractor costs $150, choosing a bidet costs $100, and the bidet itself is at least $35. The sum is $285: a 14-year pay-off time.
I mainly want the bidet for comfort and because it will make me cleaner. Comfort and hygiene are lower-priority items for me. $20/year of extra comfort drops the pay-off time to 7 years.
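The payoff arithmetic above, as a sketch (all figures are my own estimates from the comment, not measured prices):

```python
# Cost estimates, valuing my time at $50/hour.
contractor = 100 + 1 * 50   # contractor fee plus an hour to choose/schedule/oversee
research = 2 * 50           # low end of the 2-4 hours spent choosing a bidet
bidet = 35                  # minimum price of the bidet itself
total_cost = contractor + research + bidet  # $285

paper_savings = 20          # ~six rolls of toilet paper per year, in dollars
comfort_value = 20          # dollars/year of extra comfort and hygiene

print(total_cost / paper_savings)                    # 14.25 -> ~14-year payoff
print(total_cost / (paper_savings + comfort_value))  # 7.125 -> ~7-year payoff
```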
BTW: Aella, a rationalist-adjacent Twitter user, mentioned that she uses a bidet.
Hint for those who want to read the text at the link: go to the bottom and click “view source” to get something that is not an SVG.
The best explanation I have found to explain this discrepancy is that … RLACE … finds … a direction where there is a clear separation,
You could test this explanation using a support vector machine—it finds the direction that gives the maximum separation.
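A minimal numpy-only sketch of that test, assuming a linear soft-margin SVM trained by subgradient descent on the hinge loss (in practice you would likely reach for scikit-learn’s `LinearSVC` instead); the learned weight vector is the separating direction you would compare against the one RLACE finds:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic activation clusters separated along one direction.
n = 100
X = np.vstack([rng.normal(-2.0, 1.0, (n, 2)), rng.normal(2.0, 1.0, (n, 2))])
y = np.hstack([-np.ones(n), np.ones(n)])

# Linear soft-margin SVM via subgradient descent on the regularized hinge loss.
w, b = np.zeros(2), 0.0
lam, lr = 0.01, 0.1
for _ in range(1000):
    violating = y * (X @ w + b) < 1  # points inside or beyond the margin
    grad_w = lam * w - (y[violating, None] * X[violating]).sum(0) / len(X)
    grad_b = -y[violating].sum() / len(X)
    w -= lr * grad_w
    b -= lr * grad_b

direction = w / np.linalg.norm(w)  # unit vector along the max-margin separator
accuracy = np.mean(np.sign(X @ w + b) == y)
print(direction, accuracy)
```

If the SVM direction and the RLACE direction nearly coincide (high cosine similarity), that would support the “clear separation” explanation.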
(This is a drive-by comment. I’m trying to reduce my external obligations, so I probably won’t be responding.)
A lot of the steps in your chain are tenuous. For example, if I were making replicators, I’d ensure they were faithful replicators (not that hard from an engineering standpoint). Making faithful replicators negates step 3.
(Note: I won’t respond to anything you write here. I have too many things to respond to right now. But I saw the negative vote total and no comments, a situation I’d find frustrating if I were in it, so I wanted to give you some idea of what someone might disagree with/consider sloppy/wish they hadn’t spent their time reading.)
Feature request: some way to keep score. (Maybe a scoring mode that turns the black box into an outline on hover, where clicking right = unscored, left then right = correct, and left, left, then right = incorrect. Alternatively, mouse-out could mean unscored, left = incorrect, and right = correct.)
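The first click scheme is just a small grammar over click sequences. A sketch of a decoder for it, with hypothetical token names (this illustrates the proposed scheme, not an implemented feature):

```python
# Map click sequences to scoring outcomes, per the proposed scheme:
# right = unscored, left-right = correct, left-left-right = incorrect.
GRAMMAR = {
    ("right",): "unscored",
    ("left", "right"): "correct",
    ("left", "left", "right"): "incorrect",
}

def decode(clicks):
    """Return the score for a completed click sequence, or None if unrecognized."""
    return GRAMMAR.get(tuple(clicks))

print(decode(["left", "right"]))  # correct
```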