My kids were playing with an AI… not sure which one, probably Gemini… and I felt too lazy to read them a goodnight story, so I told them to ask the AI to write them one.
They asked the AI for a story about a princess who fell in love with a wizard, or something like that, but the AI told them that it refuses to generate sexually suggestive texts.
So instead of saving a few minutes of book reading, now I had to explain to them what “sexually suggestive” means, and why the AI refuses to do that. Thanks a lot, AI.
I would like to see a page like TalkOrigins, but about IQ. So that any time someone confused but generally trying to argue in good faith posts something like “but wasn’t the idea of intelligence disproved scientifically?” or “intelligence is a real thing, but IQ is not” or “IQ is just an ability to solve IQ tests” or “but Taleb’s article/tweet has completely demolished the IQ pseudoscience” or one of the many other versions… I could just post this link. Because I am tired of trying to explain, and the memes are going to stay here for the foreseeable future.
If you dismiss ideas coming from outside academia as non-scientific, you have a point. Those ideas were not properly tested, peer-reviewed, etc.
But if you dismiss those ideas as not worth scientists’ attention, you are making a much stronger statement. You are effectively making a positive statement that the probability of those ideas being correct is smaller than 5%. You may be right, or you may be wrong, but it would be nice to provide some hint about why you think so. Are you just dissing the author, or do we have actual historical experience that among ideas coming from a certain reference group, fewer than 1 in 20 turn out to be correct?
Why 5%? Let’s do the math. Suppose that we have a set of hypotheses out of which about 5% are true. We test them, using a p=0.05 threshold for publishing. That means, out of 10000 hypotheses, about 500 are true, and let’s assume that all of them get published; and about 9500 are false, and about 475 of them get published. This would result in approximately 50% failure in replication… which seems to be business as usual in certain academic journals?
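For anyone who wants to check the arithmetic, here is a minimal sketch in Python, using only the assumptions stated above (a 5% base rate of true hypotheses, a p = 0.05 publication threshold, and the optimistic assumption that every true hypothesis gets published):

```python
# Back-of-the-envelope calculation from the paragraph above.
hypotheses = 10_000
base_rate = 0.05   # assumed fraction of hypotheses that are actually true
alpha = 0.05       # p-value threshold for publication

true_hyps = hypotheses * base_rate             # 500 true hypotheses
false_hyps = hypotheses - true_hyps            # 9500 false hypotheses

published_true = true_hyps                     # optimistic: all true ones get published
published_false = false_hyps * alpha           # ~475 false positives slip through

published = published_true + published_false   # ~975 published results
false_share = published_false / published      # ~0.49

print(f"published: {published:.0f}, of which false: {false_share:.0%}")
```

Treating “fraction of published results that are false” as a rough proxy for the replication failure rate, this comes out to roughly 50%, as claimed.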
So if it is perfectly okay for scientists to explore ideas that have about a 5% chance of being correct, then by saying that a certain idea should not be explored scientifically, you seem to suggest that its probability is much smaller.
Note that this is different from expecting an idea to turn out to be correct. If an idea has a 10% chance of being correct, it means that I expect it to be wrong, and yet it makes sense to explore the idea seriously.
(This is a condensed version of an argument I made in a week-old thread, so I wanted to give it a little more visibility. On top of that, I suspect that status concerns can make a great difference in scientists’ incentives. Exploring ideas originating in academia that have a 5% chance of being right is… business as usual. Exploring ideas originating outside of academia that have a 5% chance of being right will make you look incompetent if they turn out to be wrong, which indeed is the likely outcome. No one ever got fired for writing a thesis on IBM, so to speak.)
If publication and credit standards were changed, we’d see more scientists investigating interesting ideas from both within and outside of academia. The existing structure makes scientists highly conservative in which ideas they test from any source, which is bad when applied to ideas from outside academia—but equally bad when applied to ideas from inside academia.
5% definitely isn’t the cutoff for which ideas scientists actually do test empirically.
Throwing away about 90% of your empirical work (everything except the real hits and the false alarms from your 5% base rate) would be a high price to pay for exploring possibly-true hypotheses. Nobody does that. Labs in cognitive psychology and neuroscience, the fields I’m familiar with, publish at least half of their empirical work (outside of small pilot studies, which are probably a bit lower).
People don’t want to waste work, so they focus on experiments that are pretty likely to “work” by getting “significant” results at the p<.05 level. This is because they can rarely publish studies that show a null effect, even if they’re strong enough to establish that any effect is probably too small to care about.
So it’s really more like a 50% chance base rate. This is heavily biased toward exploitation of existing knowledge rather than exploration toward new knowledge.
And this is why scientists mostly ignore ideas from outside of academia. They are very busy working hard to keep a lab afloat. Testing established and reputable ideas is much better business than finding a really unusual idea and demonstrating that it’s right, given how often that effort would be wasted.
The solution is publishing “failed” experiments. It is pretty crazy that people keep wasting time re-establishing which ideas aren’t true. Some of those experiments would be of little value, since they really can’t say if there’s a large effect or not; but that would at least tell others where it’s hard to establish the truth. And bigger, better studies finding near-zero effects could offer almost as much information as those finding large and reliable effects. The ones of little value would be published in lesser venues and so be less important on a resume, but they’d still offer value and show that you’re doing valuable work.
The continuation of journals as the official gatekeepers of what information you’re rewarded for sharing is a huge problem. Even the lower-quality ones are setting a high bar in some senses, by refusing even to print studies with inconclusive results. And the standard is completely arbitrary in celebrating large effects while refusing to even publish studies of the same quality that give strong evidence of near-zero effects.
It gets very complicated when you add in incentives and recognize that science and scientists are also businesses. There’s a LOT of the world that scientists haven’t (or haven’t in the last century or so) really tried to prove, replicate, and come to consensus on.
AlphaFold didn’t come out of academia. That doesn’t make it non-scientific. As Feynman said in his cargo-cult science speech, plenty of academic work is not properly tested. Being peer-reviewed doesn’t make something scientific.
Conceptually, I think you are making a mistake when you treat ideas and experiments as the same, and equate the probability of an experiment finding a result with the probability of the idea being true. Finding a good experiment to test an idea is nontrivial.
A friend of mine was working in a psychology lab, and according to my friend, the professor leading the lab was mostly trying to p-hack her way into publishable results.
Another friend spoke approvingly of the work of the same professor, because the professor managed to get Buddhist ideas into academic psychology, and now the official scientific definition of the term resembles certain Buddhist notions.
The professor has a well-respected research career in her field.
I think it’s important to disambiguate searching for new problems and searching for new results.
For new results: while I have as little faith in academia as the next guy, I have a web of trust in other researchers who I know do good work, and the rate of their work being correct is much higher. I also give a lot of credence to their verification / word of mouth on experiments. This web of trust is a much more useful high-pass filter for understanding the state of the field. I have no such filter for results outside of academia. When searching for new concrete information, information outside of academia is not worth scientists’ interest, due to lack of trust / reputation.
When it comes to searching for new hypotheses / problems, an important criterion is how much you personally believe in your direction. You never practically pursue ideas with a 10% probability: you ideally pursue ideas you think have a 50% probability but your peers believe have a 15% probability. (This assumes you have high risk tolerance like I do, and are okay with a lot of failure. Otherwise, do incremental research.) For problem generation, varied sources of information are useful, but the belief must come intrinsically.
When searching for interesting results to verify and replicate, it’s open season.
As a result, I think that ideas outside academia are not useful to researchers unless the researchers in question have a comparative advantage at synthesizing those ideas into good research inspiration.
As for non-ideal reasons for ignoring results outside academia, I would blame reviewers more than vague “status concerns” and a generally low appetite for risk despite working in an inherently risky profession like research.
Well, ideas from outside the lab, much less academia, are unlikely to be well suited to that lab’s specific research agenda. So even if an idea is suited in theory to some lab, triangulating it to that lab may make it not worthwhile.
There are a lot of cranks and they generate a lot of bad ideas. So a < 5% probability seems not unreasonable.
Perhaps the mental health diagnoses should be given in percentiles.
Some people complain that the definitions keep expanding, so that these days too many kids are diagnosed with ADHD or autism. The underlying reason is that these things seem to be on a scale, so it is arbitrary where you draw the line, and I guess people keep looking at those slightly below the line and noticing that they are not too different from those slightly above the line, and then they insist on moving the line.
But the same thing does not happen with IQ, despite the great pressure against politically incorrect results, despite the grade inflation at schools. That is because IQ is ultimately measured in percentiles. No matter how much pressure there is to say that everyone is above the average, the math only allows 50% of people to be smarter than the average, only 2% to be smarter than 98%, etc.
Perhaps we should do the same with ADHD and autism, too. Provide the diagnosis in the form of: “You are more hyperactive than 85% of the population”, controlled for age, maybe also for sex if the differences are significant. So you would e.g. know that yes, your child is more hyperactive than average, but not like super exceptionally hyperactive, because there are two or three kids with a comparable diagnosis in every classroom. That would provide more useful information than simply being told “no” in 1980, or “yes” in 2020.
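A minimal sketch of how such a report could be computed, assuming the trait score is roughly normally distributed within each age (and possibly sex) group and that we know the group’s mean and standard deviation; the norms below are made up purely for illustration:

```python
from statistics import NormalDist

def percentile_report(raw_score: float, group_mean: float, group_sd: float) -> str:
    """Turn a raw trait score into a percentile within its reference group,
    assuming the trait is approximately normally distributed in that group."""
    percentile = NormalDist(mu=group_mean, sigma=group_sd).cdf(raw_score)
    return f"more hyperactive than {percentile:.0%} of the reference group"

# Hypothetical example: a raw hyperactivity score of 72, in an age group
# whose (made-up) norms are mean 60 and standard deviation 12.
print(percentile_report(72, group_mean=60, group_sd=12))
# -> "more hyperactive than 84% of the reference group"
```

If the underlying data are not normal, the same idea works with empirical percentiles computed directly from the reference sample.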
Objection: Some things are not linear! People are already making this objection about autism. It is also popular to make it against intelligence, and although the IQ researchers are not impressed, this meme refuses to die. So it is very likely that the moment we start measuring something on a linear scale, someone will make this objection. My response is that I do not see how a linear scale is worse than a binary choice (a degenerate case of a linear scale) that we have now.
A better objection is that some things are… uhm, let me give you an example: You hurt your hand very painfully, so you ask a doctor whether it is broken. The doctor looks at an x-ray and says: “well, it is more broken than the hands of 98% of the population”. WTF, was that supposed to be a yes or no?
So, the percentiles can also hide important information, especially when the underlying data are bimodal or something like that. Perhaps in such cases it would help to provide a histogram with the data and a mark saying “you are here”, along with the percentile.
It seems that the broken hand example is similar to situations where we have a deep understanding of the mechanics of how something works. In those situations, it makes more sense to say “this leg is broken; it cannot do 99% of the normal activities of daily living.” And the doctor can probably fix the leg with pins and a cast without much debate over exactly how disabled the patient is.
Yeah, having or not having a gears model makes a big difference. If you have the model, you can observe each gear separately, for example look at a hurting hand and say how damaged the bones, ligaments, muscles, and skin are. If you don’t have a gears model, then there is just something that made you pay attention to the entire thing, so in effect you kinda evaluate “how much this matches the thing I have in my mind”.
For example, speaking of intelligence, I have heard a theory that it is a combination of neuron speed and short term memory size. No idea whether this is correct or not, but using it as a thought experiment, suppose that it is true and one day we find out exactly how it works… maybe that day we will stop measuring IQ and start measuring neuron speed and short term memory size separately. Perhaps instead of giving people a test, we will measure the neuron speed directly using some device. We will find people who are exceptionally high at one of these things and low at the other, and observing them will allow us to even better understand how this all works. (Why haven’t we found such people already, e.g. using factor analysis? Maybe they are rare in nature, because the two things strongly correlate. Or maybe it is very difficult to distinguish them by looking at the outputs.)
Similarly, a gears model might split the diagnosis of ADHD into three separate numbers, and autism into seven. (Numbers completely made up.) Until then, we only have one number representing the “general weirdness in this direction”. Or a boolean representing “this person seems weird”.
I don’t think we can measure most of these closely enough, and I think the symptom clustering is imperfect enough that this doesn’t provide enough information to be useful. And really, neither does IQ—I mean it’s nice to know that one is smart, or not, and have an estimate of how different from the average one is, but it’s simply wrong to take any test result at face value.
In fact, you do ask the doctor if your hand is broken, but the important information is not binary. It’s “what do I do to ensure it heals fully”. Does it require surgery, a cast, or just light duty and ice? These activities may be the same whether it’s a break, a soft-tissue tear, or some other injury.
Likewise for mental health—the important part of a diagnosis isn’t “how severe is it on this dimension”, but “what interventions should we try to improve the patient’s experience”? The actual binary in the diagnosis is “will insurance pay for it”, not “what percent of the population suffers this way”.
If you want to know whether someone would benefit from a drug or other mental treatment, the percentage is irrelevant.
Diagnoses are used to determine whether insurance companies have to pay for treatment. The percentage shouldn’t matter as much as whether the treatment is helpful for the patient.
Moving a comment away from the article it was written under, because frankly it is mostly irrelevant, but I put too much work into it to just delete it.
But occasionally I hear: who are you to give life advice, your own life is so perfect! This sounds strange at first. If you think I’ve got life figured out, wouldn’t you want my advice?
How much of your life is determined by your actions, and how much by forces beyond your control, is an empirical question. You seem to believe it’s mostly your actions. I am not trying to disagree here (I honestly don’t know), just saying that people may legitimately have either model, or a mix thereof.
If your model is “your life is mostly determined by your actions”, then of course it makes sense to take advice from people who seem to have it best, because those are the ones who probably made the best choices, and can teach you how to make them, too.
If your model is “your life is mostly determined by forces beyond your control”, then the people who have it best are simply the lottery winners. They can teach you that you should buy a ticket (which you already know has 99+% probability of not winning), plus a few irrelevant things they did which didn’t have any actual impact on winning.
The mixed model “your life is partially determined by your actions, and partially by forces beyond your control” is more tricky. On one hand, it makes sense to focus on the part that you can change, because that’s where your effort will actually improve things. On the other hand, it is hard to say whether people who have better outcomes than you, have achieved it by superior strategy or superior luck.
Naively, a combination of superior strategy and superior luck should bring the best outcomes, and you should still learn the superior strategy from the winners, but you should not expect to get the same returns. Like, if someone wins a lottery, and then lives frugally and puts all their savings in index funds, they will end up pretty rich. (Richer than people who won the lottery and then wasted the money.) It makes sense to live frugally and put your savings in index funds, even if you didn’t win the lottery. You should expect to end up rich, although not as rich as the person who won the lottery first. So, on one hand, follow the advice of the “winners at life”, but on the other hand, don’t blame yourself (or others) for not getting the same results; with average luck you should expect some reversion to the mean.
But sometimes the strategy and luck are not independent. The person with superior luck wins the lottery, but the person with superior strategy who optimizes for the expected return would never buy the ticket! Generally, the person with superior luck can win at life because of doing risky actions (and getting lucky) that the person with superior strategy would avoid in favor of doing something more conservative.
So the steelman of the objection in the mixed model would be something like: “Your specific outcome seems to involve a lot of luck, which makes it difficult to predict what would be the outcome of someone using the same strategy with average luck. I would rather learn strategy from successful people who had average luck.”
A toy model to illustrate my intuition about the relationship between strategy and luck:
Imagine that there are four switches called A, B, C, D, and you can put each of them into position “on” or “off”. After you are done, a switch A, B, C, D in a position “on” gives you +1 point with probability 20%, 40%, 60%, 80% respectively, and gives you −1 point with probability 80%, 60%, 40%, 20% respectively. A switch in a position “off” always gives you 0 points. (The points are proportional to utility.)
Also, let’s assume that most people in this universe are risk-averse, and only set D to “on” and the remaining three switches to “off”.
What happens in this universe?
The entire genre of “let’s find the most successful people and analyze their strategy” will insist that the right strategy is to turn all four switches to “on”. Indeed, there is no other way to score +4 points.
The self-help genre is right about turning on switch C, but wrong about switches A and B. Neither the conservative people nor the contrarians get the answer right.
The optimal strategy (setting A and B to “off”, C and D to “on”) provides an expected result of +0.8 points. The traditional D-only strategy provides an expected result of +0.6 points, which is not too different. On the other hand, the optimal strategy makes it impossible to get the best outcome; with the best luck you score +2 points, which is quite different from the +4 points advertised by the self-help genre. This means the optimal strategy will probably fail to impress the conservative people, and the contrarians will just laugh at it.
It will probably be quite difficult to distinguish between switches B and C. If most people you know personally set both of them to “off”, and the people you know from the self-help literature set both of them to “on” and got lucky at both, you have few data points to compare; the difference between 40% and 60% may not be large enough to empirically determine that one of them is a net harm and the other is a net benefit.
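A quick simulation of this toy model (just a sketch; the probabilities and strategies are exactly the ones defined above) confirms the expected values and shows how noisy a handful of observations would be:

```python
import random

# Each switch that is "on" gives +1 point with the listed probability,
# and -1 point otherwise; switches that are "off" give 0 points.
P_PLUS = {"A": 0.2, "B": 0.4, "C": 0.6, "D": 0.8}

def play(strategy):
    """One run of the game with the given set of switches turned on."""
    return sum(1 if random.random() < P_PLUS[s] else -1 for s in strategy)

def expected(strategy):
    """Exact expected score: each "on" switch contributes p*(+1) + (1-p)*(-1)."""
    return sum(2 * P_PLUS[s] - 1 for s in strategy)

strategies = {
    "conservative (D only)": {"D"},
    "optimal (C and D on)": {"C", "D"},
    "self-help (all four on)": {"A", "B", "C", "D"},
}

random.seed(0)
for name, strat in strategies.items():
    runs = [play(strat) for _ in range(100_000)]
    print(f"{name}: expected {expected(strat):+.1f}, "
          f"simulated mean {sum(runs) / len(runs):+.2f}, best run {max(runs):+d}")
```

The conservative strategy averages about +0.6, the optimal one about +0.8, and turning everything on averages about 0.0 while occasionally producing the spectacular +4 that the success literature loves to write about.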
(Of course, whatever your beliefs are, it is possible to build a model where acting on your beliefs is optimal, so this doesn’t prove much. It just illustrates why I believe that it is possible to achieve outcomes better than usual, and also that it is a bad idea to follow the people with extremely good outcomes, even if they are right about some of the things most people are wrong about. I believe that in reality, the impact of your actions is much greater than in this toy model, but the same caveats still apply.)
In reality it has to be a mixture, right? So many parts of my day are absolutely in my control, at least small things for sure. Then there are obviously a ton of things that are 100% out of my control. I guess the goal is to figure out how to navigate the two and find some sort of serenity. After all, isn’t that the old saying about serenity? As an addict, I often think about what you have said. I personally don’t believe addiction to be a disease, my DOC is alcohol, and I don’t buy into the disease model of addiction. I think it is a choice and maybe a disorder of the brain, and semantics on the word “disease”. But I can’t imagine walking into a cancer ward full of children and saying me too! People don’t just get to quit cancer cold turkey. I also understand, like you’ve pointed out and I reaffirmed, that it is both. I have a predisposition to alcoholism because of genetics, and it’s also something I am aware of and a choice. I thought I’d respond to your post since you were so kind as to reply to my stuff. I find this forum very interesting and I am not nearly as intelligent as most here but man it’s fun to bounce ideas!
Yeah, this is usually the right answer. Which of course invites additional questions, like which part is which...
With addiction, I also think it is a mixture of things. For example, trivially, no one would abuse X if X were literally impossible to buy, duh. But even before “impossible”, there is a question of “how convenient”. If they sell alcohol in the same shop you visit every day to buy fresh bread, it is more tempting than if you had to visit a different shop, simply because you get reminded regularly about the possibility.
For me, it is sweet things. I eat tons of sugar, despite knowing it’s not good for my health. But fuck, I walk around that stuff every time I go shopping, and even if I previously didn’t think about it, now I do. And then… well, I am often pretty low on willpower. I wish I had some kind of augmented reality glasses which would simply censor the things in the shop I decide I want to live without. Like I would see the bread, butter, white yoghurt, and some shapeless black blobs between that. Would be so much easier. (Kind of like an ad-blocker for the offline world. This may become popular in the future.)
Another thing that contributes to addiction is frustration and boredom. If I am busy doing something interesting, I forget the rest of the world, including my bad habits. But if the day sucks, the need to get “at least something pleasant, now” becomes much stronger.
Then it is about how my home is arranged and what habits I create. Things that are “under my control in the long term”, like you don’t build a good habit overnight, but you can start building it today. For example, with a former girlfriend I had a deal that there is one cabinet that I will never open, and she needs to keep all her sweets there; never leave them exposed on the table, so that I would not be tempted.
I was thinking about which parts of the economy are effectively destroyed in our society by having an income tax (as an analogy to Paul Graham’s article saying that a wealth tax would effectively destroy startups; previous shortform). And I think I have an answer; but I would like an economist to verify it.
Where I live, the marginal income tax is about 50%. Well, only a part of it is literally called “tax”, the other parts are called health insurance and social insurance… which in my opinion is misleading, because it’s not like the extra coin of income increases your health or unemployment risk proportionally; it should be called health tax and social tax instead… anyway, 50% is the “fraction of your extra coin the state will automatically take away from you”, which is what matters for your economic decisions about making that extra coin.
In theory, by the law of comparative advantage, whenever you are better at something than your neighbor, you should be able to arrange a trade profitable for both sides. (Ignoring the transaction costs.) But if your marginal income is taxed at 50%, such trade would be profitable only if you are more than 2× better than your neighbor. And that still ignores the fixed costs (you need to study the law, do some things to comply with it, study the tax consequences, fill the tax report or pay someone to do it for you, etc.), which are significant if you trade in small amounts, so in practice you sometimes need to be even 3× or 4× better than your neighbor to make a profit.
This means that the missing part of the economy is all those people who are better at something than their neighbors, but not 2×, 3×, or 4× better; at least not reliably. In an alternative tax system without income tax, they could engage in profitable trade with their neighbors; in our system, they don’t. And “being slightly better, but not an order of magnitude better at something” probably describes a majority of the population, which suggests there is a huge amount of possible value that is not being created, because of the income tax.
Even worse, this “either you are an order of magnitude better, or go away” system creates barriers to entry in many places in the society. Unqualified vs qualified workers. Employees vs entrepreneurs. Whenever there is a jump required (large upfront investment for uncertain gain), fewer people cross the line than if they could walk across it incrementally: learn a bit, gain an extra coin, learn another bit, gain two extra coins… gradually approaching the limit of your abilities, and getting an extra income along the way to cover the costs of learning. The current system is demotivating for people who are not confident they could make the jump successfully. And it contributes to social unfairness, because some people can easily afford to risk a large upfront investment for uncertain gain, some would be ruined by a possible failure, and some don’t even have the resources necessary to try.
To reverse this picture, I imagine that in a society without income tax, many people would have multiple sources of income: they could have a job (full-time or part-time) and make some extra money helping their neighbors. The transition from an employee to an entrepreneur would be gradual; many would try it even if they don’t feel confident about going the entire way, because going halfway would already be worth it. And because more people would try, more would succeed; also, some of them would not have the skills to go the entire way at the beginning, but would slowly develop them along the way. Being an entrepreneur would not be stressful the same way it is now, and this society would have a lot of small entrepreneurs.
...and this kind of “bottom-up” economy feels healthier to me than the “top-down” economy, where your best shot at success is creating a startup for the purpose of selling it to a bigger fish. I suppose the big fish, such as Paul Graham, would disagree, but that’s the entire point: in a world without barriers to entry, you wouldn’t need to write motivational speeches for people to try their luck, they could advance naturally, following their incentives.
I think this is insightful, but my guess is that a society without income tax would not in fact be nearly as much better at providing opportunities for people who are kinda-OK-ish at things as you conjecture, and I further guess that more people than you think are at least 2x better at something than someone they can trade with, and furthermore (though it doesn’t make much difference to the argument here) I think something’s fundamentally iffy about this whole model of when people are able to find work.
Second point first. For there to be opportunities for you to make money by working, in a world with 50% marginal income tax, what you need is to be able to find someone you’re 2x better than at something, and then offer to do that thing for them.
… Actually, wait, isn’t the actual situation nicer than that? Roll back the income tax for a moment. You can trade profitably with someone else provided your abilities are not exactly proportional to one another, and that’s the whole point of “comparative advantage”. If you’re 2x worse at doing X than I am and 3x worse at doing Y, then there are profitable trades where you do some X for me and I do some Y for you.

(Say it takes me one day to make either a widget or a wadget, and it takes you two days to make a widget and three days to make a wadget, and both of us need both widgets and wadgets. If we each do our own thing, then maybe I alternate between making widgets and wadgets, and get one of each every 2 days, and you do likewise and get one of each every 5 days. Now suppose that you only make widgets, making one every 2 days, and you give 3⁄5 of them to me, so that on average you keep one of your own widgets every 5 days, same as before. I am now getting 0.6 widgets from you every 2 days without having to do any work for them. Now every 2 days I spend 0.4 days making widgets, so I still have a total of one widget per 2 days, same as before. I spend another 1 day making one wadget for myself, so I still have one wadget per 2 days, same as before; and another 0.4 days making wadgets for you, so you get one wadget per 5 days, same as before. At this point we are exactly where we were before, except that I have 10% of my time free, which I can use to make some widgets and/or wadgets for us both, leaving us both better off.)
I haven’t thought it through but I guess the actual condition under which you can work profitably if there’s 50% income tax might be “there’s someone else, and two things you can both do, such that [(your skill at A) / (your skill at B)] / [(their skill at A) / (their skill at B)] is at least 2”, whereas without the tax the only requirement is that the ratio be bigger than 1.
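A tiny sketch of that condition, using the widget/wadget numbers from the example above (purely illustrative; “skill” here just means output per day, and the “at least 2” threshold is my guess from the previous paragraph, not an exact result):

```python
def advantage_ratio(my_x, my_y, their_x, their_y):
    """Comparative-advantage ratio when I do X and they do Y.
    Skills are measured in output per day; a ratio of exactly 1 means our
    abilities are proportional and there is nothing to gain from trade."""
    return (my_x / my_y) / (their_x / their_y)

# Widget/wadget example above: I make 1 widget or 1 wadget per day,
# you make 1/2 widget or 1/3 wadget per day.
me = {"widget": 1.0, "wadget": 1.0}
you = {"widget": 1 / 2, "wadget": 1 / 3}

# You specialize in widgets (your comparative advantage), I specialize in wadgets.
r = advantage_ratio(you["widget"], you["wadget"], me["widget"], me["wadget"])
print(f"comparative-advantage ratio: {r:.2f}")                   # 1.50

print("profitable with no income tax (ratio > 1):", r > 1)       # True
print("profitable with 50% tax (guessed ratio >= 2):", r >= 2)   # False
```

On this guess, the widget/wadget pair above could trade profitably in a tax-free world, but would fall just short of the threshold with a 50% marginal tax.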
Anyway, that’s a digression and I don’t think it matters that much for present purposes. (If what you want is not merely to “earn a nonzero amount” but to “earn enough to be useful”, then probably you do need something more like absolute advantage rather than merely comparative advantage.) The point is that what you need is a certain kind of skill disparity between you and someone else, and the income tax means that the disparity needs to be bigger for there to be an employment opportunity.
But if you’re any good at anything, and if not everyone else is really good at everything—or, considering comparative advantage again, if you’re any good at anything relative to your other abilities, and not everyone else is too, then there’s an opportunity. And it seems to me that if you have learned any skill at all, and I haven’t specifically learned that same skill, then almost certainly you’ve got at least a 2x comparative advantage there. (If you haven’t learned any skills at all and are equally terrible at everything, and I have learned some skills, then you have a comparative advantage doing something I haven’t learned. But, again, that’s probably not going to be enough to earn you enough to be any use.)
OK, so that was my second point: surely 2x advantages are commonplace even for not-very-skilled workers. Only a literally unskilled worker is likely to be unable to find anything they can do 2x better than someone.
Moving on to my (related) first point, let’s suppose that there are some people who have only tiny advantages over anyone else. In principle, they’re screwed in a world with income tax, and doing fine in a world without, because in the latter they can find something they’re a bit better at than someone else, and work for them. But in practice I’m pretty sure that almost everyone who is doing work that isn’t literally unskilled is (perhaps only by virtue of on-the-job training) doing it well more than 2x better than someone completely untrained, and I suspect that actually finding and exploiting “1.5x” opportunities would be pretty difficult. If someone’s barely better than completely-unskilled, it’s probably hard to tell that they’re not completely unskilled, so how do they ever get the job, even in a world without income tax?
Finally, the third point. A few times above I’ve referred to “literally unskilled” workers. In point of fact, I think there are literally unskilled workers. That ought to be impossible even in a world without income tax. What’s going on? Answer: work isn’t only about comparative or absolute advantage in skills.

Suppose I am rich and I need two things done; one is fun and one is boring. I happen to be very good at both tasks. But I don’t wanna do the boring one. So instead I pay you (alas, you are poor) to do the boring task. Not because of any relevant difference in skill, but just because we value money differently because I’m rich and you’re poor, and you’re willing to do the boring job for a modest amount of money and I’m not. Everybody wins. Or suppose there’s no difference in wealth or skill between us, and we both need to do two things 100x each. Either of us will do better if we pick one thing and stick with it, so we don’t incur switching costs and get maximal gains from practice. So you do Thing One for me and I do Thing Two for you.

I think income taxes still produce the same sort of friction, and require the advantages (how much more willing you are to do boring work than me on account of being poor, how much we gain from getting more practice and avoiding switching costs) to be larger roughly in inverse proportion to how much of your income isn’t taxed, so this point is merely a quibble that doesn’t make much difference to your argument.
Thinking about the relation between enlightenment and (the cessation of) signaling.
I know that enlightenment is supposed to be about cessation of all kinds of cravings and attachments, but if we assume that signaling is a huge force in human thinking, then cessation of signaling is a huge part of enlightenment.
Some random thoughts in that direction:
The paradoxical role of motivation in enlightenment—enlightenment is awesome, but a desire to be awesome is the opposite of enlightenment.
Abusiveness of the Zen masters towards their students: typically, the master tries to explain the nature of enlightenment using an unhelpful metaphor (I suppose because most masters suck at explaining). Immediately, a student does something obviously meant to impress the master. The master goes berserk. Sometimes, as a consequence, the student achieves enlightenment. -- My interpretation is that realizing (on the System 1 level) that the master is an abusive asshole who actually sucks at teaching removes the desire to impress him; and because in this social setting the master was perceived as the only person worth impressing, this removes (at least temporarily) the desire to impress people in general.
A few koans are of the form: “a person A does X, a person B does X, the master says: A did the right thing, but B did the wrong thing”—the surface reading is that the first person reacted spontaneously, and the second person just (correctly) realized that X will probably be rewarded and tried to copy the motions. A more Straussian reading is that this story is supposed to confirm to the savvy reader that masters really don’t have any coherent criteria and their approval is pointless.
(There are more Straussian koans I can’t find right now, where a master says “to achieve enlightenment, you must know at least one thousand koans” and someone says “but Bodhidharma himself barely knew three hundred” and the master says “honestly I don’t give a fuck”… well, using more polite words, but the impression is that the certification of enlightenment is completely arbitrary and maybe you just shouldn’t care about being certified.)
Quite straightforward in Nansen’s cat—the students try to signal their caring and also their cleverness, and thus (quite predictably) fail to actually save the cat. (Joshu’s reaction to hearing this is probably an equivalent of facepalm.)
Stopping the internal speech in meditation—internal speech is practicing talking to others, which is mostly done to signal something. The first step towards the cessation of signaling is to try spending 20 minutes without (practicing) signaling, which is already a difficult task for most people.
Meditation skills reducing suffering from pain—this gives me the scary idea that maybe we unconsciously increase our perception of pain, in order to better signal our pain. From a crude behaviorist perspective, if people keep rewarding your expression of pain (by their compassion and support), they condition you to express more pain; and because people are good at detecting fake emotions, the most reliable way to express more pain is to actually feel more pain. The scary conclusion is that a compassionate environment can actually make your life more painful… and the good news is that if you learn to give up signaling, this effect can be reversed.
Out of curiosity (about constructivism) I started reading Jean Piaget’s The Language and Thought of the Child. I am still at the beginning, so this comment is mostly meta:
It is interesting (kinda obvious in hindsight), how different a person sounds when you read a book written by them, compared to reading a book about them. This distortion by textbooks seems to happen in a predictable direction:
People sound more dogmatic than they really were, because in their books there is enough space for disclaimers, expressing uncertainty, suggesting alternative explanations, providing examples of a different kind, etc.; but a textbook will summarize this all as “X said that Y is Z”.
People sound less empirical and more like armchair theorists, because in their books there is enough space to describe the various experiences and experiments that led them to their conclusions, but the textbook will often just list the conclusions.
People sound more abstract and boring, because the interesting parts get left out in the textbooks, replaced by short abstract definitions.
(I guess the lesson is that if you learn about someone from a textbook and conclude “this guy is just another boring dogmatic armchair theorist”, you should consider the possibility that this is simply what textbooks do to people they describe, and try reading their most famous book to give them a chance.)
So my plan was to find out what exactly Piaget meant by his abstract conclusion that kids “construct” models of reality in their heads… and instead here is this experiment where two researchers observed two 6-year-old boys at elementary school for one month and wrote down every single thing they said (plus the context), and then made statistics of how often, when one kid says something to another, there is no response, and it is okay because no response was really expected, because small kids are mostly talking to themselves even when they address other people… and I am laughing because I just returned from the playground with my kids, and this is so true for 3-year-olds. -- More disturbingly, then I start thinking about whether blogging, or even me writing this specific comment now, is really fundamentally different. Piaget classifies speech acts primarily by whether you expect or don’t expect a response; but with blogging, you always may get a response, or you may get silence, and you will only find out much later.
a large number of people, whether from the working classes or the more absent-minded of the intelligentsia, are in the habit of talking to themselves, of keeping up an audible soliloquy. This phenomenon points perhaps to a preparation for social language. The solitary talker invokes imaginary listeners, just as the child invokes imaginary playfellows.
I started reading as research; now I read because it is fun.
Good thing to do with a 6 yo: type stories they dictate. At that age, the physical act of writing is a bottleneck, so this can release a torrent of imagination. They love seeing their stories printed, like “real” stories they read in books. And it’s a fun thing to do together.
When I do this, I don’t edit what he writes. It’s his story. I’m just transcribing it. So the most I’ll do, if he mangles a sentence so badly that it’s not even something he’d say, is repeat the first half out loud as I’m typing, and let him finish it again.
My kids are familiar with recording sound on Windows. They already record their songs or poems. For some reason, they don’t like the idea of recording a story, even if I offer to transcribe it afterwards.
Perhaps transcribing in real time would be more fun...
To understand qualia better, I think it would help to get a new sensory input. Get some device, for example a compass or an infrared camera, and connect it to your brain. After some time, the brain should adapt and you should be able to “feel” the inputs from the device.
Congratulations! Now you have some new qualia that you didn’t have before. What does it feel like? Does this experience feel like a sufficient explanation to say that the other qualia you have are just like this, only acquired when you were a baby?
After reading the Progress & Poverty review at ACX, it seems to me that land is the original Bitcoin. Find a city that has a future, buy some land, and HODL.
If you can rent the land (the land itself, not the structures that stand on it), you even have a passive income that automatically increases over time… forever. This makes it even better than Bitcoin.
So, the obvious question is why so many people are angry about Bitcoin, but so few (only the Georgists, it seems) are angry about land.
EDIT: A possible explanation is that land is ancient and associated with high status, Bitcoin is new and low-status. Therefore problems associated with Bitcoin can be criticized openly, while problems associated with land are treated as inevitable.
While I think much of the anger about Bitcoin is caused by status considerations, other reasons to be more upset about Bitcoin than land rents include:
Land also has use-value, Bitcoin doesn’t
Bitcoin has huge negative externalities (environmental/energy, price of GPUs, enabling ransomware, etc.)
Bitcoin has a different set of tradeoffs to trad financial systems; the profusion of scams, grifts, ponzi schemes, money laundering, etc. is actually pretty bad; and if you don’t value Bitcoin’s advantages...
Full-Georgist ‘land’ taxes disincentivise searching for superior uses (IMO still better than most current taxes, worse than Pigou-style taxes on negative externalities)
Oh, that’s an interesting point: in a Georgist system, if you invent a better use of your land, the rational thing to do is shut up, because making it known would increase your tax!
I wonder what would happen in an imperfectly Georgist system, with a 50% or 90% land value tax. Someone smarter than me probably already thought about it.
Also, people can brainstorm about the better use of their neighbor’s land. No one would probably spend money to find out whether there is oil under your house. But cheap ideas like “your house seems like a perfect location to build a restaurant” would happen.
Maybe in Georgist societies people would build huge fences around their land, to discourage neighbors from even thinking about it.
When you tell people which food contains given vitamins, also tell them how much of the food they would need to eat in order to get their recommended daily intake of the given vitamin from that source.
As an example, instead of “vitamin D can be found in cod liver oil, or eggs”, tell people “to get your recommended intake of vitamin D, you should eat 1 teaspoon of cod liver oil, or 10 eggs, every day”.
The reason is that without providing quantitative information, people may think “well, vitamin X is found in Y, and I eat Y regularly, so I got this covered”, while in fact they may be eating only 1⁄10 or 1⁄100 of the recommended daily intake. When you mention quantities, it is easier for them to realize that they don’t eat e.g. half a kilogram of spinach each day on average (therefore, even eating spinach quite regularly doesn’t mean you got your iron intake covered).
The quantitative information is typically provided in micrograms or international units, which of course is something that System 1 doesn’t understand. To get an actionable answer, you need to make a calculation like “an average egg has 60 grams of yolk… a gram of cooked egg yolk contains 0.7 IU of vitamin D… the recommended daily intake of vitamin D for an adult is 400 or 600 IU depending on the country… that means, 9-14 eggs a day, assuming I only get the vitamin D from eggs”. I can’t make the calculation in my head, because there is no way I would remember all these numbers, plus the numbers for other vitamins and minerals. But with some luck, I could remember “1 teaspoon of cod liver oil, or 10 eggs, for vitamin D”.
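Spelled out as a calculation (a sketch; the numbers are just the ones quoted in the paragraph above, not authoritative nutritional data):

```python
# Figures quoted above: ~60 g of yolk per egg, ~0.7 IU of vitamin D per gram
# of cooked yolk, and a recommended daily intake of 400-600 IU depending
# on the country.
yolk_grams_per_egg = 60
iu_per_gram_yolk = 0.7
iu_per_egg = yolk_grams_per_egg * iu_per_gram_yolk   # 42 IU per egg

for daily_target_iu in (400, 600):
    eggs_needed = daily_target_iu / iu_per_egg
    print(f"{daily_target_iu} IU/day ≈ {eggs_needed:.1f} eggs per day")
# prints roughly 9.5 and 14.3 eggs per day, i.e. the "9-14 eggs" above
```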
Obvious problem: the recommended daily intake differs by country, eggs come in different sizes, and probably contain different amounts of vitamin D per gram. Which is why giving the answer in eggs will feel irresponsible, and low status (you are exposing yourself to all kinds of nitpicking). Yes; true. But ultimately, the eggs (or whatever is the vegan equivalent of food) are what people actually eat.
recommended daily intake of vitamin D for an adult is 400 or 600 IU depending on the country
This assumes that the RDAs that those organizations publish are trustworthy. You have other organizations like the Endocrine Society that recommend an order of magnitude more vitamin D.
If the RDA of 400 or 600 IU were sensible, you could also meet it by spending a lot of time in the sun once every two weeks.
Have you tried using Cronometer or a similar nutrition-tracking service to quickly find these relationships? I’ve found Cronometer in particular to be useful because it displays each nutrient in terms of a percent of the recommended daily value for one’s body weight. For example, I can see that a piece of salmon equals over 100% of the recommended amount of omega-3 fatty acids for the day, while a handful of sunflower seeds only equals 20% of one’s daily value of vitamin E. Therefore, I know that a single piece of fish is probably enough, but that I should probably eat a larger portion of sunflower seeds than I would otherwise.
I suppose a percentage system like this one is just the reciprocal of saying something like “10 eggs contain the recommended daily amount of vitamin D.”
Thank you for the link! Glad to see someone uses the intuitive method. My complaint was about why this isn’t the standard approach. Like, recently I was reading a textbook on nutrition (the actual school textbook for cooks; I was curious what they learn), where the information was provided in the form of “X is found in A, B, C, D, also in E”, without any indication of how often you are supposed to eat any of these.
(If I said this outside of Less Wrong, I would expect the response to be: “more is better, of course, unless it is too much, of course; everything in moderation”, which sounds like an answer, but is not much.)
And with corona and the articles on vitamin D, I opened Wikipedia, saw “cod liver” as the top result, thought: no problem, they sell it in the shop, it’s not expensive, and it tastes okay, I just need to know how much. Then I ran the numbers… and then I realized “shit, 99% of people will not do this, even if they get curious and read the Wikipedia page”. :(
I noticed recently that I almost miss the Culture War debates (on internet in general, nothing specific about Less Wrong). I remember that in the past they seemed to be everywhere. But in recent months, somehow...
I don’t use Twitter. I don’t really understand the user interface, and I have no intention to learn it, because it is like the most toxic website ever.
Therefore most Culture War content in English came to me in the past via Reddit. But they keep making the user interface worse and worse, so a site that was almost addictive in the past, is so unpleasant to use now, that it actually conditions me to avoid it.
Slate Star Codex has no new content. Yeah, there are “slatestarcodex” and “motte” debates on Reddit, but… I already mentioned Reddit.
Almost all newspaper articles in my native language are paywalled these days. No, I am not going to pay for your clickbait.
So… I am vaguely aware that Trump was an American president and now it is Biden (or is it still Trump, and Biden will be later? dunno), and there were (still are?) BLM protests in the USA. And in my country, the largest political party recently split in two, and I don’t even know the name of the new one, and I don’t even care because what’s the point, the next election is in 3 years. Other than this… blissful ignorance.
And I am not asking you to fix my ignorance—neither do I try to protect it; I just don’t want to invite political content to LW—just commenting on how weird this feels. And I didn’t even notice how this happened; only recently my wife asked me “so what is the latest political controversy you read about online”, and it was a shock to realize that I actually have no idea.
OK, here is the question: is this just about my bubble, or is it a global consequence of COVID-19 taking away attention from corona-unrelated topics?
This is your bubble, because in the relevant spaces they have largely incorporated COVID into the standard fighting and everything, not turned down the fighting at all. I think your bubble sounds great in lots of ways, and am glad to hear you have space from it all.
I guess in my ontology these new debates simply do not register as proper Culture Wars.
I mean, the archetypal Culture War is a conflict of values (“we should do X”, “no, we should do Y”) where I typically care to some degree about both, so it is a question of trade-offs; combined with different models of the world (“if we do A, B will happen”, “no, C will happen”); about topics that have already been discussed in some form for a few decades or centuries, and that concern many people. Or something like that; not sure I can pinpoint it. It’s like, it must feel like a grand philosophical topic, not just some technical question.
Compared with that, with COVID-19 we get the “it’s just a flu” opinion, which for me is like anti-vaxers (whom I also don’t consider a proper Culture War). To some degree it is interesting to steelman it, like to question, when people die having ten serious health problems at the same time, how do we choose the official cause of death; or if we just look at total deaths, how to distinguish the second-order effects, such as more depressed people committing suicide, but also fewer traffic deaths… but at the end of the day, you either assume a worldwide conspiracy of doctors that keep healthy people needlessly attached to ventilators, or you admit it’s not just a flu. (Or you could believe that the ventilators are just a hoax promoted by the government.) At the moment when even Putin’s regime officially admitted it is not a flu, I no longer see any reason to pay attention to this opinion.
Then we have this “lockdown” vs whatever is the current euphemism for just letting people die, which at least is the proper value conflict. And maybe this is about my privilege… that when people have to decide whether they’d rather lose their jobs or lose their parents, I am not that emotionally involved, because I think there is a high chance I can keep both regardless of what the nation decides to do collectively: I can work remotely; and my family voluntarily socially isolates… I am such a lucky selfish bastard, and apparently, so is my entire bubble. I mean, if you ask me, I am on the side of not letting people die, even if it means lower profits for one year. But then I hear those people complaining about how inconvenient it is to wear face masks, and how they just need to organize huge weddings, go to restaurants and cinemas and football matches… and then I realize that no one cares about my opinion on how to survive best, because apparently no one cares about surviving itself.
What else? There was this debate about whether Sweden is this magical country that doesn’t do anything about COVID-19 and yet COVID-19 avoids it completely, but recently I don’t even hear about them anymore. Maybe they all died, who knows.
Lucky bubble. Or maybe Facebook finally fixing their algorithm so that it only shows me what I want to see.
Compared with that, with COVID-19 we get the “it’s just a flu” opinion, which for me is like anti-vaxers (whom I also don’t consider a proper Culture War).
My sense is “it’s just a flu” is a conflict of values; there are people for whom regular influenza is cause for alarm and perhaps changing policies (about a year ago, I had proposed to friends the thought experiment of an annual quarantine week, wondering whether it would actually reduce the steady-state level of disease or if I was confused about how that dynamical system worked), and there are people who think that cowardice is unbecoming and illness is an unavoidable part of life. That is, some think the returns to additional worry and effort are positive; others think they are negative.
you either assume a worldwide conspiracy of doctors that keep healthy people needlessly attached to ventilators, or you admit it’s not just a flu.
Often people describe medications as “safer than aspirin”, but this is sort of silly because aspirin is one of the more dangerous medications people commonly take, grandfathered in by being discovered early. In a normal year, influenza is responsible for over half of deaths due to infectious disease in the US; the introduction of a second flu would still be a public health tragedy, from my perspective.
(Most people, I think, are operating off the case fatality rate instead of the mortality per 100k; in 2018, influenza killed about 2.5X as many people as AIDS in the US, but people are much more worried about AIDS than the flu, and for good reason.)
But they keep making the user interface worse and worse, so a site that was almost addictive in the past, is so unpleasant to use now, that it actually conditions me to avoid it.
If—if there were a way to use the old Reddit UI, would you want to know about it?
Gur byq.erqqvg.pbz fhoqbznva yrgf lbh hfr gur byq vagresnpr.
Thank you; yes, I already know about it. But the fact that I have to remember, and keep switching when I click on a link found somewhere, is annoying enough already. (It would be less annoying with a browser plugin that does it automatically for me, and I am aware such plugins exist, but I try to keep my browser plugins at a minimum.) So, at the end of the day, I am aware that a solution exists, and I am still annoyed that I would need to take action to achieve something that used to be the default option. Also, this alternative will probably be removed at some point in the future, so I would just be delaying the inevitable.
When autism was low-status, all you could read was how autism is having a “male brain” and how most autists were males. The dominant paradigm was how autists lack the theory of mind… which nicely matched the stereotype of insensitive and inattentive men.
Now that Twitter culture made autism cool, suddenly there are lots of articles and videos about “overlooked autistic traits in women” (which to me often seem quite the same as the usual autistic traits in men). And the dominant paradigm is how autistic people are actually too sensitive and easily overwhelmed… which nicely matches the stereotype of sensitive women.
For example: difficulty in romantic relationships, difficulty understanding things because you interpret other people’s speech literally, anxiety from pretending to be something you are not, suppressing your feelings to make other people comfortable, changing your language and body language to mirror others, being labeled “sensitive” or “gifted”, feeling depleted after social events, stimming, being more comfortable in writing than in person, sometimes taking a leadership role because it is easier than being a member of the herd, good at gaslighting yourself, rich inner speech you have trouble articulating, hanging out with people of the opposite sex because you don’t do things stereotypical for your gender, excelling at school, awkward at flirting—haha, nope, definitely couldn’t happen to someone like me. /s
(The only point in that video that did not apply symmetrically was: female special interests are usually more socially acceptable than male special interests. It sounds even more convincing when the author puts computer programming in the list of female special interests, so the male special interests are reduced to… trains.)
I suppose the lesson is that if you want to get some empathy for a group of people, you first need to convince the audience that the group consists of women, or at least that there are many women in that group who deserve special attention. Until that happens, anyone can “explain” the group by saying basically: “they are stupid, duh”.
I mean, I was denied a diagnosis for ‘having empathy’ as a young child, and granted a diagnosis as an older child the next decade because that was determined to be an inaccurate criterion; this was, I do believe, before Twitter was founded and certainly before its culture.
Elsevier found a new method to extract money! If you send an article to their journal from a non-English-speaking country, it will be rejected because of your supposed mistakes in English language. To overcome this obstacle, you can use Elsevier’s “Language Editing services” starting from $95. Only afterwards will the article be sent to the reviewers (and possibly rejected).
This happens even if your article was already checked by a native English speaker who found no errors. On the other hand, if you let a co-author living in an English-speaking country submit the article, the grammar will always be okay.
Based on anecdotal evidence from a few scientists I know. Though some of them have had similar experiences with other journals that do not offer their own language services, so maybe this is not about money but about being primed to check for “bad English” from authors in non-English-speaking countries.
The easiest way to stop consuming some kind of food is simply to never buy it. If you don’t have it at home, you are not tempted to eat it.
(You still need the willpower at the shop—but how much time do you spend at the shop, compared to the time spent at home?)
But sometimes you do not live alone, and even if you want to stop eating something, other people sharing the same kitchen may not share your preferences.
I found out that asking them to cover the food by a kitchen towel works surprisingly well for me. If I don’t see it, the temptation is gone. Even if I perfectly know what is under the towel. Heck, even if I look under the towel and then put it back.
Of course, what works for me may not work for you. But it feels important to me to have figured out that I am most vulnerable to visual temptations.
Now I should spend some time thinking about how else I could use this knowledge. What are the visual temptations in my environment that could be made much weaker simply by covering them (even if I still know what is there)? Heh, maybe I should put a curtain over the kitchen door. (Or rather, always bring a bottle of water with me, because drinking is my most frequent reason to enter the kitchen.) Remove various kinds of autocomplete from the web browser...
tl;dr—The surprising part is not that “out of sight, out of mind” works, but that it works even if I merely cover the tempting thing with a towel, despite knowing perfectly well what is under that towel. The trigger for temptation is the sight, not the knowledge.
I noticed that some people use “skeptical” to mean “my armchair reasoning is better than all expert knowledge and research, especially if I am completely unfamiliar with it”.
Example (not a real one): “I am skeptical about the idea that objects would actually change their length when their speed approaches the speed of light.”
The advantage of this usage is that it allows you to dismiss all expertise you don’t agree with, while making you sound a bit like an expert.
I suspect you’re reacting to the actual beliefs (disbelief in your example), rather than the word usage. In common parlance, “skeptical” means “assign low probability”, and that usage is completely normal and understandable.
The ability to dismiss expertise you don’t like is built into humans, not a feature of the word “skeptical”. You could easily replace “I am skeptical” with “I don’t believe” or “I don’t think it’s likely” or just “it’s not really true”.
I think that “skeptical” works better as a status move. If I say I don’t believe you, that makes us two equals who disagree. If I say I am skeptical… I kinda imply that you are not. Similarly, a third party now has the options to either join the skeptical or the non-skeptical side of the debate.
(Or maybe I’m just overthinking things, of course.)
Today I learned that our friends at RationalWiki dislike effective altruism, to put it mildly. As David Gerard himself says, “it is neither altruistic, nor effective”.
In section Where “Effective Altruists” actually send their money, the main complaint seems to be that among (I assume) respectable causes such as fighting diseases and giving money to poor people, effective altruists also support x-risk organisations, veganism, and meta organisations… or, using the language of RationalWiki, “sending money to Eliezer Yudkowsky”, “feeling bad when people eat hamburgers”, and “complaining when people try to solve local problems”.
Briefly looking at the numbers of donors in the surveys and trying to group the charities into categories (chances are I misclassified something), it seems like disease charities got 211+114+43+16=384 donors, poverty charities 101, Yudkowsky charities 77+45=122, meta charities 46+21+14+10+10=101, animal charities 27+22=49, and Leverage 7. So even if you think that only disease charities and poverty charities are truly altruistic, that would still be 63% of donors giving money to truly altruistic charities. Uhm, could be worse, I guess.
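For anyone who wants to check my arithmetic, here is the tally as a throwaway sketch (the grouping into categories is my own approximation, so treat the 63% as a ballpark figure):

```python
# Rough re-tally of the donor counts quoted above; the category groupings
# are my own guesses, not anything official from the survey.
disease   = 211 + 114 + 43 + 16        # 384
poverty   = 101
yudkowsky = 77 + 45                    # 122
meta      = 46 + 21 + 14 + 10 + 10     # 101
animal    = 27 + 22                    # 49
leverage  = 7

total = disease + poverty + yudkowsky + meta + animal + leverage
print(total)                                          # 764
print(round(100 * (disease + poverty) / total))       # 63
```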
Also, this is a weird complaint:
GiveWell has also recommended that people spam the Against Malaria Foundation (AMF) with all (except if they are billionaires, obviously) the money they have set aside to donate, on the grounds that they think it’s the best charity, even at the risk of exhausting the AMF’s room for more funding, amongst other dubious decisions.
Like, without any evidence that AMF’s room for funding was actually exhausted, this all reduces to: “we hate EAs because they do not send money to best charities, and also because they send them more money than they can handle”. But sneering was never supposed to be consistent, I guess.
There are over 100 edits to this article. Many, especially the large ones, are made by David Gerard, but there is also Greenrd and others.
It would be nice to have better tools for exploring wiki history, for example, if I could select a sentence or two, and get a history of this specific sentence, like only the edits that modified it, and preferably get all the historical versions of that sentence on a single page along with the user names and links to edits, so that I do not need to click on each edit separately and look for the sentence.
It is also interesting to compare Wikipedia and RationalWiki articles on the same topic.
Wikipedia’s narrative is that EA is a high-status “philosophical and social movement” responsible for over $400,000,000 in donations in 2019, based on principles of “impartiality, cause neutrality, cost-effectiveness, and counterfactual reasoning”, and its prominent causes are “global poverty, animal welfare, and risks to the survival of humanity over the long-term future”.
The rationalist community is mentioned only briefly:
A related group that attracts some effective altruists is the rationalist community.
In addition, the Machine Intelligence Research Institute is focused on the more narrow mission of managing advanced artificial intelligence.
Other contributions were [...] the creation of internet forums such as LessWrong.
Furthermore, Machine Intelligence Research Institute is included in the “Effective Altruism” infobox at the bottom of the page. Mention of Eliezer Yudkowsky was removed as not properly sourced (fair point, I guess). The Wikiquote page on EA quotes Scott Alexander and Eliezer Yudkowsky.
RationalWiki’s narrative is that “The philosophical underpinnings mostly come from philosopher Peter Singer [but] This did not start the effective altruism subculture”; “The effective altruism subculture — as opposed to the concept of altruism that is effective — originated around LessWrong”; “The ideas have been around a while, but the current subculture that calls itself Effective Altruism got a big push from MIRI and its friends in the LessWrong community”. But the problem, according to RationalWiki, is that rationalists believed that MIRI is an effective charity, which is a form of Pascal’s Mugging.
“effective altruists currently tend to think that the most important causes to focus on are global poverty, factory farming, and the long-term future of life on Earth. In practice, this amounts to complaining when people try to solve local problems, feeling bad when people eat hamburgers, and sending money to Eliezer Yudkowsky, respectively.”
...so, my impression is that according to Wikipedia, EA is high-status and mostly unrelated to the rationalist community; and according to RationalWiki, EA was effectively started by rationalist community and is low-status.
1) There was this famous marshmallow experiment, where the kids had an option to eat one marshmallow (physically present on the table) right now, or two of them later, if they waited for 15 minutes. The scientists found out that the kids who waited for the two marshmallows were later more successful in life. The standard conclusion was that if you want to live well, you should learn some strategy to delay gratification.
(A less known result is that the optimal strategy to get two marshmallows was to stop thinking about marshmallows at all. Kids who focused on how awesome it would be to get two marshmallows after resisting the temptation, were less successful at actually resisting the temptation compared to the kids who distracted themselves in order to forget about the marshmallows—the one that was there and the hypothetical two in the future—completely, e.g. they just closed their eyes and took a nap. Ironically, when someone gives you a lecture about the marshmallow experiment, closing your eyes and taking a nap is almost certainly not what they want you to do.)
After the original experiment, some people challenged the naive interpretation. They pointed out that whether delaying gratification actually improves your life depends on your environment. Specifically, if someone tells you that giving up a marshmallow now will let you have two in the future… how much should you trust their word? Maybe your experience is that after trusting someone and giving up the marshmallow in front of you, you later get… a reputation of being an easy mark. In such a case, grabbing the marshmallow and ignoring the talk is the right move. And the correlation the scientists found? Yeah, sure, people who can delay gratification and happen to live in an environment that rewards such behavior will succeed in life more than people who live in an environment that punishes trust and long-term thinking, duh.
Later experiments showed that when the experimenter establishes themselves as an untrustworthy person before the experiment, fewer kids resist taking the marshmallow. (Duh. But the point is that their previous lives outside the experiment have also shaped their expectations about trust.) The lesson is that our adaptation is more complex than was originally thought: the ability to delay gratification depends on the nature of the environment we find ourselves in. For reasons that make sense, from the evolutionary perspective.
2) Readers of Less Wrong often report having problems with procrastination. Also, many provide an example when they realized at young age, on a deep level, that adults are unreliable and institutions are incompetent.
I wonder if there might be a connection here. Something like: realizing the profound abyss between how our civilization is, and how it could be, is a superstimulus that switches your brain permanently into “we are doomed, eat all your marshmallows now” mode.
This seems likely to me, although I’m not sure “superstimulus” is the right word for this observation.
It certainly does make sense that people who are inclined to notice the general level of incompetence in our society will be less inclined to trust it and rely on it for the future.
You know why the fence was built. The original reason no longer applies, or maybe it was a completely stupid reason. Yes, you should tear down the stupid fence.
And yet, there is a worry… might the fact that you see this stupid fence be anthropic evidence that in the Everett branches without this stupid fence you are already dead?
As with many anthropic considerations, there is a serious problem determining the reference class here. Generally an appropriate reference class is “somebody sufficiently like you”, and then you compute weightings for some parameter that varies between universes and affects the number and/or probability of observers.
The trouble is that “sufficiently like you” is a uselessly vague specification. The most salient reference class seems to be “people considering removing a fence very much like this one”. But that’s no help at all! People in other universes who already removed their universe’s fence are excluded regardless of whether they lived or died.
Okay, what about “people who have sufficiently close similarity to my physical and mental make-up at (time now)”? That’s not much help either: almost all of them probably have nothing to do with the fence. Whether or not the fence is deadly will have negligible effect on the counts.
Maybe consider “people with my physical and mental make-up who considered removing this fence between (now minus one day) and (now), and are still alive”. At this point I consider that I am probably stretching a question to get a result I want. What’s more, it still doesn’t help much. Even comparing universes with p=0 of death to p=1, there’s at most a factor of 2 difference in counts for the median observer. Given such a loaded question, that’s a pretty weak update from an incredibly tiny prior.
I enjoyed reading a review of Sick Societies. Seems like it’s difficult to find the right balance between “primitive cultures are stupid” and “everything in primitive cultures is full of deep wisdom that we modern people are unable to understand”.
As usual, public opinion swings like a pendulum; on the social level it goes from “we are 100% correct about everything, there is nothing to learn from others” to “everything would be better if we replaced our ways with the wisdom of others”.
In the rationalist community, I think we started at the position of thinking about everything explicitly, and we keep getting “post-rationalist” reminders of Chesterton’s fences, illegible wisdom hidden in traditions (especially Buddhism), et cetera. Which is good, in moderate doses. But it is also good to admit that sometimes things that seem stupid… are actually stupid. Not every seemingly stupid behavior contains a hidden wisdom; sometimes people are stupid and/or stuck in horrible Nash equilibria.
As usual, the frustrating answer is “it depends”. If we see something that doesn’t make sense to us, it is good to try figuring out whether there is a good reason we missed. But this doesn’t mean there always is a good reason. It doesn’t even mean (as Chesterton would implore us) that we can find out why exactly some tribe started doing this many years ago. Maybe they simply made a mistake! Or they had a mad leader who was good at killing those who opposed him, but his policy proposals were disastrous. They were just as fallible as we are; possibly much more.
Saying whether “something” “is” “stupid” is sort of confused. If I run algorithm X which produces concrete observable Y, and X is good and Y is bad, is Y stupid? When you say that Y is stupid, what are you referring to? Usually we don’t even want to refer to [Y, and Y alone, to the exclusion of anything Y is entangled with / dependent on / productive of / etc.].
I don’t have an exact definition, but approximately it is a behavior that is justified by false beliefs, and if only one person did it, we would laugh at them, and the person would only be hurting themselves… but if many people start doing it, and they add an extra rule that those who don’t do the thing or even argue against doing the thing must be punished, they can create a Nash equilibrium where people doing the thing hurt themselves, but people who refuse to do the thing get hurt more by their neighbors. And where people, if they were allowed to think about it freely, would reflexively not endorse being stuck in such an equilibrium. (It’s mentioned in the article that often when the people learn that others do not live in the same equilibrium, they become deeply ashamed of their previous behavior. Which suggests that an important part of why they were doing it was that they did not realize an alternative is possible—either it didn’t occur to them at all, or they believed incorrectly that for some reason it wouldn’t work.)
I cannot dismiss that possibility completely, but I assume that cultural inventions like the scientific method and free speech are helpful—I mean, compared to living in a society that believes in horrible monsters and spirits everywhere, where interpersonal violence is the norm, and a two-digit percentage of males die by murder. In such a society, if someone tells you “believe X, or else”, then it doesn’t matter how absurd X is, you will at least pretend that you take it seriously. (Or you die.) Even if it’s something obviously self-serving, like the leader of the tribe telling little boys that they need to suck his dick, otherwise they will not have enough male energy to grow up healthy.
These days, if you express doubts about the Emperor’s new clothes… you will likely survive. So the stupid ideas get some opposition. And I don’t know how well the Asch conformity experiment replicates, but it suggests that even a lone dissent can do wonders.
There is a question whether human morality is actually improving over centuries in some meaningful sense, or whether it is just a random walk that feels like improving to us (because we evaluate other people using the metric of “how similar is their morality to ours” which of course gives a 100% score to us and less to anyone else).
I think that an important thing to point out here is that our models of the world improve in general. And although some moral statements are made instinctively, other moral statements are made in form of implications—“I instinctively feel X. X implies Y. Therefore, Y.”—and those implications can be factually wrong. Importantly, this is not moral realism. (Technically, it is an implied judgment that logically coherent systems of morality are better than logically incoherent ones.)
“The only thing that matters are paperclips”—I guess we can only agree to disagree.
“2+2=5, therefore the only thing that matters are paperclips”—nope, you are wrong.
From this perspective, a part of the moral progress can be explained by humans having better models of humans and the world in general. (And when someone says “a difference in values”, we should distinguish between “a difference in instincts” and “shitty reasoning”.)
I like to think that there is a selection process going on.
Over long time scales, cultures that satisfy their people’s needs better have—other things being equal—higher chances of continuing to exist.
Moral systems are, to a large degree, about people’s well-being—at least according to people’s beliefs at that time. And that is partly about having a good model of people’s needs.
Spartans, Mongols, Vikings, and many others beg to disagree.
I’m with Viliam that we have better models of morality. The Mongols would be quite disappointed by our weakness. And at least they ruled the biggest empire ever. But their culture got selected out of the memepool too.
I’m very grateful that we are alive despite having nukes, and the fact that people and culture at this time are less violent and more collaborative is surely one reason for that.
Vikings might still disagree from their perspective.
The reason wealth taxes have such dramatic effects is that they’re applied over and over to the same money. Income tax happens every year, but only to that year’s income. Whereas if you live for 60 years after acquiring some asset, a wealth tax will tax that same asset 60 times. A wealth tax compounds.
But wait, isn’t income tax also applied over and over to the same money? I mean, it’s not if I keep the money for years, sure. But if I use it to buy something from another person, then it becomes the other person’s income, gets taxed again; then the other person uses the remainder to buy something from yet another person, where the money gets taxed again; etc.
Now of course there are many differences. The wealth tax is applied at constant speed—the income tax depends on how fast the money circulates. The wealth tax is paid by the same person over and over again—the income tax is distributed along the flow of the money.
Not sure what exactly my thesis is here. I just have a feeling that the income tax could actually have a similar effect, except distributed throughout the society, which makes it more difficult to notice and describe.
Also, affecting different types of people: wealth tax hits hardest the people who accumulate large wealth in short time and then keep it for long time; income tax hits hardest the people who circulate the money fastest. Or maybe the greatest victims of income tax are invisible—some hypothetical people who would circulate money extremely fast in an alternate reality where even 1% income tax is frowned upon, but who don’t exist in our reality because the two-digit income tax would make this behavior clearly unprofitable.
Am I just imagining things here, or does this correspond to something economists already have a name for? I vaguely remember something about tax, inflation, and multipliers. But who are those fast-circulators our tax system hits hardest? Graham’s article isn’t merely about how money affects money, but how it affects motivation and human activity (wealth tax → startups less profitable → fewer startups). What motivation and human activity is similarly affected by the recursive applications of the income tax?
To avoid misunderstanding, I am not asking the usual question: how many kids we could feed by taxing the startups more. I am asking, what kind of possible economic activity is suppressed by having a tax system that is income-based rather than wealth-based? In the trade-off, where one option would destroy the startups, what exactly is being destroyed by having the opposite option?
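As a toy illustration of the difference (my own sketch, not anything from Graham’s article; the function names are made up): both taxes shrink the same money geometrically, but the wealth tax compounds with the number of years the asset is held, while the income tax compounds with the number of times the money changes hands.

```python
# Illustrative toy model only: a wealth tax is applied once per year to the
# same asset; an income tax is applied once per transaction as the same money
# circulates through the economy.

def after_wealth_tax(asset: float, rate: float, years: int) -> float:
    """Value left after paying an annual wealth tax for `years` years."""
    return asset * (1 - rate) ** years

def after_income_tax(amount: float, rate: float, hops: int) -> float:
    """Amount still circulating after `hops` taxed transactions."""
    return amount * (1 - rate) ** hops

print(after_wealth_tax(100, 0.02, 60))   # ~29.8: a 2% wealth tax held for 60 years
print(after_income_tax(100, 0.20, 10))   # ~10.7: a 20% income tax over 10 hops
```

Which tax bites harder then depends mostly on how fast the money moves, which is exactly why the invisible victims would be the fast-circulators.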
I would very much like to see a society where money circulates very quickly. I expect people will have many reasons to be happier and suffer less than they do now.
As you observe, income taxes encourage slowing down circulation of money, while wealth taxes speed up circulation of money (and creation of value), but I think there are better ways of assessing tax than those two. I suspect heavily taxing luxury goods which serve no functional purpose, other than to signal wealth, is a good direction to shift taxes towards, although there may be better ways I haven’t thought of yet.
Not answering your question, just some thoughts based on your post.
In the meanwhile I remembered reading long ago about some alternative currencies. (Paper money; this was long before crypto.) If I remember correctly, the money was losing value over time, but you paid no income tax on it. (It was explained that exactly because the money lost value, it was not considered real money, so getting it wasn’t considered a real income, therefore no tax. This sounds suspicious to me, because governments enjoy taxing everything, but perhaps just no one important noticed.)
As a result, people tried to get rid of this money as soon as possible, so it circulated really quickly. It was in a region with very high unemployment, so in absence of better opportunities people also accepted payment in this currency, but then quickly spent it. And, according to the story, it significantly improved the quality of life in the region—people who otherwise couldn’t get a regular job, kept working for each other like crazy, creating a lot of value.
But this was long ago, and I don’t remember any more details. I wonder what happened later. (My pessimistic guess is that the government finally noticed, and prosecuted everyone involved for tax evasion.)
David Gerard (the admin of RationalWiki) doxed Scott Alexander on Twitter, in response to Arthur Chu’s call “if all the hundreds of people who know his real last name just started saying it we could put an end to this ridiculous farce”.
Dude, we already knew you were uncool, but this is a new low.
There is a thing called Ultraspeaking; they teach you to speak better; David Chapman wrote a positive review recently. Here are some quotes from their free e-book:
In the following chapters we’re going to tackle: 1. Why thinking is the enemy of speaking 2. How to use your brain’s autocomplete feature to answer difficult questions
As we often say: “The enemy of speaking is thinking about speaking.”
Well, as counterintuitive as it may seem, you must learn to speak . . . before you think.
This is provided as an example of a wrong thing to do:
On this particular day, Alex had been climbing the route for several hours and had reached a third of the way up the cliff when he set his foot on a precarious hold and immediately questioned his choice: Will my foot slip?
He was climbing without a rope or safety equipment of any kind. One mistake and he could fall to his death. After a few minutes of thought, Alex decided to turn back and climb down the mountain back to his camp.
Specifically, the wrong thing was not that he climbed the mountain without any safety equipment, but the fact that he realized that it was dangerous!
Here is advice on writing your bottom line first:
There’s an incredible opportunity here for you. Ending strong is the low-hanging fruit of speaking under pressure. And it’s entirely in your control.
Another client noted an even more remarkable distinction: “I used to think ending strong meant coming up with a brilliant conclusion. But then I slowly realized that ending strong just means injecting energy and certainty into your final words. I notice that when I say my last sentence with confidence and enthusiasm, people respond especially positively.”
Ending strong is more than just a mindset: it’s a surprisingly simple and effective way to leave a strong lasting impression.
*
Hey, I know that this is supposed to be about System 1 vs System 2, and that you are supposed to think correctly before giving your speech, because trying to do two things at the same time reduces your performance. (Well, unless someone asks you a question. Then, you are supposed to answer without thinking. Hopefully you did some thinking before, and already have some good cached answers.)
But it still feels that the lesson could be summarized as: “talk like everyone outside the rationalist community does all the time”.
EDIT:
This also reminds me of 1984:
It was not the man’s brain that was speaking, it was his larynx. The stuff that was coming out of him consisted of words, but it was not speech in the true sense: it was a noise uttered in unconsciousness, like the quacking of a duck.
The intention was to make speech, and especially speech on any subject not ideologically neutral, as nearly as possible independent of consciousness. For the purposes of everyday life it was no doubt necessary, or sometimes necessary, to reflect before speaking, but a Party member called upon to make a political or ethical judgement should be able to spray forth the correct opinions as automatically as a machine gun spraying forth bullets.
Specifically, the wrong thing was not that he climbed the mountain without any safety equipment, but the fact that he realized that it was dangerous!
That does not seem like a good summary. He knew beforehand that it was dangerous and knew it afterwards. The problem was that he was not focused on climbing while pursuing a goal where being focused on climbing is important for success.
But it still feels that the lesson could be summarized as: “talk like everyone outside the rationalist community does all the time”.
No. People not listening to other people and instead thinking about what they will say next is something that normal people frequently do.
But it still feels that the lesson could be summarized as: “talk like everyone outside the rationalist community does all the time”.
If non-rationalist people knew it all along, there wouldn’t be need to write such books.
On the other hand, I think if an average rationalist tries to give a speech from pure inspiration, the result is going to be weird. Like, for example, the speech of HJPEV before the first battle. HJPEV got away with it, because he has the reputation of the Boy Who Lived and he had already pulled some awesome shenanigans, so his weird speech earned him weirdness points instead of losing them, but it’s not a trick the average rationalist should try on their first attempt at giving an inspiring speech.
If non-rationalist people knew it all along, there wouldn’t be need to write such books.
I guess a more careful way to put this would be that they talk like this all the time in private, but when giving a speech, most of them freeze and try to do something else, which is a mistake. They should keep talking like they usually do, and I suppose the course is teaching them that.
With rationalists, it is a bit more complicated, because talking like you normally do is not the optimal way to do speeches.
A simple rule for better writing: meta goes to the end.
Not sure if this is also useful for others or just specifically my bad habit. I start to write something, then I feel like some further explanation or a disclaimer is needed, then I find something more to add… and it is tempting to start the article with the disclaimers and other meta stuff. The result is a bad article where after the first screen of text you still haven’t seen the important stuff, and now you are probably bored and close the browser tab.
Psychologically, it feels like I predict objections, so I try to deflect them in advance. But it results in bad writing.
Instead, I now decided to start with the meat of the article, and move the disclaimers and explanations to the end (unless I maybe later decide that they are not needed at all). I can add footnotes to the potentially controversial parts, or maybe just a note saying “this will be explained later”.
This is also related to the well-known (and well-ignored) rule of explaining: provide specific examples first, generalize later.
My own version of this is over-trying to introduce a topic. I’ll zoom out until I hit a generally relatable idea like, “one day I was at a bookstore and...”, then I’ll retrace my steps until I finally introduce what I originally wanted to talk about. That makes for a lot of confusing filler.
The opposite of this, and what I use to correct myself, is how Scott Alexander starts his posts with the specific question or statement he wants to talk about.
This, of course, depends on the audience and the standards of the medium. And even more on whether your main point is what you’re calling “meta”, or whether the meta is really an addendum to whatever you’re exploring.
For things longer than a few paragraphs, put a summary up front, then sections for each supporting idea, then a re-summary of how the details support the thesis. If the “meta” is disclaimers and exceptions and acknowledgement that the thesis isn’t applicable to everywhere readers might assume you intend, then I think a brief note at the front is worth including, mentioning that there’s a lot of unknowns and exceptions which are explored at the end.
Both sides are way less competent than we assumed. Humans are not even trying to keep the AI in a box. Bing chat is not even trying to pretend to be friendly.
We expected an intellectually fascinating conflict between intelligent and wise humans evaluating the AI, and the maybe-aligned maybe-unaligned AI using smart arguments why it should be released to rule the world.
What we got instead, is humans doing random shit, and AIs doing random shit.
Still, a reason for concern is that the AIs can get smarter, while I do not see a similar hope for humanity.
If it’s worth doing, it’s worth doing well. If it’s not worth doing, but you do it for some reason, it’s still worth doing well.
A good notion of taking an idea seriously is to develop it without bound, as opposed to dithering once it gets too advanced or absurd, lacking sufficient foundation. Like software. Confusing resolute engagement with belief is the source of trouble this could cause (either by making you believe crazy things, or by acting on crazy ideas). Without that confusion, there are only benefits from not making the error of doing things poorly just because the activity probably has no use/applicability.
This sense of taking ideas seriously asks to either completely avoid engaging the thing (at least for the time being), or to do it well, but to never dither. If something keeps coming up, do keep making real progress on it (a form of curiosity). It’s also useful to explicitly sandbox everything as hypothetical reasoning, or as separate frames, to avoid affecting actual real world decisions unless an idea grows up to become a justified belief.
I wonder if every logical fallacy has a converse fallacy, and whether it would be useful to compose a list of fallacies arranged in pairs. Perhaps it would help us discover new ones, as the missing counterparts of fallacies we already know.
For example, some fallacies consist of taking a heuristic too seriously. Experts are often right about things, but an “argument by authority” assumes that this is true in 100% of situations. Similarly, wisdom of crowds, and an “argument by popularity”. The converse fallacy would be ignoring the heuristic completely, even in situations where it makes sense. The opposite of argument by authority is listening to crackpots and taking them seriously. The opposite of argument by popularity is doing things that everyone avoids (usually to find out they were avoiding it for a good reason).
There is a specific example I have in mind, not sure if it has a name. Imagine that you are talking about quantum physics, and someone interrupts you by saying that people who do “quantum healing” are all charlatans. You object that you were not talking about those, but about actual physicists who do actual quantum physics. Then the person accuses you of committing the “No True Scotsman” fallacy—because from their perspective, everyone they know who uses the word “quantum” is a charlatan, and you are just dismissing this lifelong experience entirely, and insisting that no matter how many quantum charlatans are out there, they don’t matter, because certainly there is someone somewhere who does the “quantum” things scientifically. How many quantum healers do you have to observe until you can finally admit that the entire “quantum” thing is debunked?
Yes, most of them do have an inverse, but rarely is that inverse as common or as necessary to guard against. Also, reversed stupidity is not intelligence—a lot of things are multidimensional enough that truth is just in a different quadrant than the line implied by the fallacy and its reverse.
Rationalists: If you write your bottom line first, it doesn’t matter what clever arguments you write above it, the conclusion is completely useless as evidence.
Post-rationalists: Actually, if that bottom line was inherited from your ancestors, who inherited it from their ancestors, etc., that is evidence that the bottom line is useful. Otherwise, this culturally transmitted meme would be outcompeted by a more useful meme.
Robin Hanson: Actually, that is only evidence that writing the bottom line is useful. Whether it is useful to actually believe it and act accordingly, that is a completely different question.
Rationalists: If you write your bottom line first, it doesn’t matter what clever arguments you write above it, the conclusion is completely useless as evidence.
The classic take is that once you’ve written your bottom line, then any further clever arguments that you make up afterwards won’t influence the entanglement between your conclusion and reality. So: “Your effectiveness as a rationalist is determined by whichever algorithm actually writes the bottom line of your thoughts.”
That is not saying that “the conclusion is completely useless as evidence.”
Could someone please ELI5 why using a CNOT gate (if the target qubit was initially zero) does not violate the no-cloning theorem?
EDIT:
Oh, I think I got it. The forbidden thing is to have a state “copied and not entangled”. CNOT gate creates a state that is “copied and entangled”, which is okay, because you can only measure it once (if you measure either the original or the copy, the state of the other one collapses). The forbidden thing is to have a copy that you could measure independently (e.g. you could measure the copy without collapsing the original).
Just to (hopefully) make the distinction a bit more clear:
A true copying operation would take |psi1>|0> to |psi1>|psi1>; that’s to say, it would take as input one qubit in an arbitrary quantum state and a second qubit in |0>, and output two qubits in the same arbitrary quantum state that the first qubit was in. For our example, we’ll take |psi1> to be an equal superposition of 0 and 1: |psi1> = |0> + |1> (ignoring normalization).
If CNOT is a copying operation, it should take (|0> + |1>)|0> to (|0> + |1>)(|0> + |1>) = |00> + |01> + |10> + |11>. But as you noticed, what it actually does is create an entangled state (in this case, a Bell state) that looks like |00> + |11>.
So in some sense yes, the forbidden thing is to have a state copied and not entangled, but more importantly in this case CNOT just doesn’t copy the state, so there’s no tension with the no-cloning theorem.
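If it helps, here is a quick numerical check of the above (a sketch using numpy; the state names simply mirror the notation in this comment):

```python
import numpy as np

zero = np.array([1.0, 0.0])
one  = np.array([0.0, 1.0])
psi1 = (zero + one) / np.sqrt(2)      # |psi1> = (|0> + |1>)/sqrt(2)

# CNOT on two qubits: the first qubit is the control, the second the target.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

bell = CNOT @ np.kron(psi1, zero)     # CNOT applied to |psi1>|0>
print(np.round(bell, 3))              # [0.707 0.    0.    0.707] = (|00> + |11>)/sqrt(2)

true_copy = np.kron(psi1, psi1)       # what a genuine clone |psi1>|psi1> would look like
print(np.round(true_copy, 3))         # [0.5 0.5 0.5 0.5] = (|00> + |01> + |10> + |11>)/2
```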
Some context: I am a “quantum autodidact”, and I am currently reading the book Q is for Quantum, which is a very gentle, beginner-friendly introduction to quantum computing. I was thinking about how it relates to the things I have read before, and then I noticed that I was confused. I looked at Wikipedia, which said that CNOT does not violate the no-cloning theorem… but I didn’t understand the explanation why.
I think I get it now. |00> + |11> is not a copy (looking at one qubit collapses the other), |00> + |01> + |10> + |11> would be a copy (looking at one qubit would still leave the other as |0> + |1>).
Approximately how is the cost of a quantum computer related to its number of qubits?
My guess would be more than linear (high confidence) but probably less than exponential (low confidence), but I know almost nothing about these things.
We don’t yet know how to build quantum computers of arbitrary size at all, so asking about general scaling laws for cost isn’t meaningful yet. There are many problems both theoretical and material that we think in principle are solvable, but we are still in early stages of exploration.
Some people express strong dislike at seeing others wear face masks, which reminds me of the anti-social punishment.
I am talking about situations where some people wear face masks voluntarily, for example in mass transit (if the situation in your country is different, imagine a different situation). In theory, if someone else is wearing the mask, even if you believe that it is utterly useless, even if for you wearing a face mask is the most uncomfortable thing you could imagine… hey, it’s other person paying the cost, not you. Why so angry? Why not let them do whatever they are doing, and mind your own business?
One possible explanation is that whatever is voluntary today might become mandatory tomorrow. If the mask-wearers see that there is a lot of them, they may decide to start pressuring others into wearing the masks. The mere act of wearing the mask publicly creates a common knowledge. “There are people who are okay with wearing the masks.” You need to quickly create the opposite common knowledge, “there are people who are not okay with wearing the masks”, and merely not wearing the mask does not send a sufficiently strong signal, because it is the default. It does not distinguish between people who strongly object, and those who are merely lazy or nonstrategic. So you have to express your non-mask-wearing more strongly.
Another possible explanation is that even if you do not believe in the benefit of wearing the masks, those other people obviously do. Thus, from their perspective, you are the kind of person who defects at social cooperation. And even if from your perspective they are wrong and silly, being labeled “uncooperative” could have actual negative consequences for you. The only way to avoid the label, without wearing the mask yourself, is to make them stop wearing their masks. So you punish them.
Face masks prevent people from reading other people’s emotions. I would expect that there are some anxious people who are more afraid when the people around them are masked.
Project idea: ELI5pedia. Like Wikipedia, but optimized for being accessible for lay audience. If some topics are too complex, they could be written in multiple versions, progressing from the most simple to the most detailed (but still as accessible as possible).
Of course it would be even better if Wikipedia itself was written like this, but… well, for whatever reason, it is not.
That is “(Simple English) Wikipedia”, not “Simple (English Wikipedia)”.
I will check it later. The articles that prompted me to write this don’t exist in the Simple English version, so I can’t quickly compare how much the reduction of vocabulary actually translates into a simple exposition of ideas.
One Thousand and One Nights is actually a metaphor for web browsing.
You start with a firm decision that it will be only one story and then it is over. But there is always an enticing hyperlink at the end of each story which makes you click, sometimes a hyperlink in the middle of a story that you open in a new tab… and when you finally stop reading, you realize that three years have passed and you have three new subscriptions.
Technically, Chesterton fence means that if something exists for no good reason, you are never allowed to remove it.
Because, before you even propose the removal, you must demonstrate your understanding of a good reason why the thing exists. And if there is none...
More precisely, it seems to me there is a motte and bailey version of Chesterton fence: the motte is that everything exists for a reason; the bailey is that everything exists for a good reason. The difference is, when someone challenges you to provide an understanding why a fence was built, whether answers such as “because someone made a mistake” or “because of regulatory capture” or “because a bad person did it to harm someone” are allowed.
On one hand, such explanations feel cheap. A conspiracy theorist could explain literally everything by “because evil outgroup did it to hurt people, duh”. On the other hand, yes, sometimes things happen because people are stupid or selfish; what exactly am I supposed to do if someone calls a Chesterton fence on that?
The difference is, when someone challenges you to provide an understanding why a fence was built, whether answers such as “because someone made a mistake” or “because of regulatory capture” or “because a bad person did it to harm someone” are allowed.
If a fence is built because of regulatory capture, it’s usually the case that the lobbyists who argued for the regulation made a case for the law that isn’t just about their own self-interest.
It takes effort to track down the arguments that were made for the regulation, beyond whatever reasons you come up with when thinking about the issue yourself.
“Someone made a mistake” or “because a bad person did it to harm someone” are only valid answers if a single person could put up the fence without cooperation from other people. That’s not the case for any larger fence.
When laws and regulations get passed, there’s usually a lot of thought going into them being the way they are that isn’t understood by everybody who criticizes them. It might be the case that everybody who was involved in the creation is now dead and they left no documentation for their reasons, but plenty of times it’s just a lack of research effort that results in not having a better explanation than “because of regulatory capture”.
Since when does it say you have to demonstrate your understanding of a good reason? The way I use and understand it, you just have to demonstrate your understanding of the reason it exists, whether it’s good or bad.
But I do think that people tend to miss subtleties with Chesterton’s fence. For example, recently someone told me Chesterton’s fence requires justifications for why to remove something, not for why it exists—which is close, but not it. It talks about understanding, not about justification.
At its core, it’s a principle against arguing from ignorance—arguments of the form “x should be removed because i don’t know why it’s there”.
I think people confuse it to be about justification because usually if something exists there’s a justification (else usually someone would have already removed it), and because a justification is a clearer signal of actual understanding, rather than plain antagonism, than a historic explanation.
“X exists because of incentives of the people who established it. They are rewarded for X, and punished for non-X, therefore...”
“That is uncharitable and motivated. I am pretty sure there must be a different reason. Try again.”
And, of course, maybe I am uncharitable and motivated. Happens to people all the time, why should I expect myself to be immune?
But at the same time I noticed how the seemingly neutral Chesterton fence can become a stronger rhetorical weapon if you are allowed to specify further criteria the proper answers must pass.
Right. I don’t think “That is uncharitable and motivated. I am pretty sure there must be a different reason. Try again.” is a valid response when talking about Chesterton’s fence. You only have to show that your understanding of why something exists is complete enough—that’s easier to signal with good reasons for why it exists, but if there aren’t any, then historic explanations are sufficient.
Chesterton’s fence might need a few clear Schelling fences so people don’t move the goalposts without understanding why they’re there ;)
Could you recommend me a good book on first-order logic?
My goal is to understand the difference between first-order and second-order logic, preferably deeply enough to develop an intuition for what can be done and what can’t be done using first-order logic, and why exactly it is so.
It seems like there are a few predictions that the famous antifragility literature got wrong (and if you point it out on Twitter, you get blocked by Taleb).
But the funny part starts when you consider the consequences of such failed predictions on the theory of antifragility itself.
One possible interpretation is that, ironically, antifragility itself is an example of a Big Intellectual Idea that tries to explain everything, and then fails horribly when you start relying on it. From this perspective, Taleb lost the game he tried to play.
Another possible interpretation is that the theory of antifragility itself is a great example of antifragility. It does not matter how many wrong predictions it makes, as long as it makes one famous correct prediction that people will remember while ignoring the wrong ones. From this perspective, Taleb wins.
Going further meta, the first perspective seems like something an intellectual would prefer, as it considers the correctness or incorrectness of a theory; while the second perspective seems like something a practical person would prefer, as it considers whether writing about theory of antifragility brings fame and profit. Therefore, Taleb wins… by being wrong… about being right when others are wrong.
I imagine a truly marvelous “galaxy brain” meme of this, which this margin is too narrow to contain.
So I was watching random YouTube videos, and suddenly YouTube is like: “hey, we need to verify you are at least 18 years old!”
“Okay,” I think, “they are probably going to ask me about the day of my birth, and then use some advanced math to determine my age...”
...but instead, YouTube is like: “Give me your credit card data, I swear I am totally not going to use it for any evil purpose ever, it’s just my favorite way of checking people’s age.”
Thanks, but I will pass. I believe that giving my credit card data to strangers I don’t want to buy anything from is a really bad policy. The fact that all changes in YouTube seem to be transparently driven by a desire to increase revenue does not increase my trust. I am not sure what exactly could happen, but… I will rather wait a few months, and then read a story about how it happened to someone else.
(What, you thought I was trying to watch some porn? No thanks, that would probably require me to give the credit card number, social security number, scans of passport and driving license, and detailed data about my mortgage.)
YouTube lets me watch the video (even while logged out). Is it a region thing?? (I’m in California, USA). Anyway, the video depicts
dirt, branches, animals, &c. getting in Rapunzel’s hair as it drags along the ground in the scene when she’s frolicking after having left the tower for the first time, while Flynn Rider offers disparaging commentary for a minute, before declaring, “Okay, this is getting weird; I’m just gonna go.”
If you want to know how it really ends, check out the sequel series!
What is the easiest and least frustrating way to explain the difference between the following two statements?
X is good.
X is bad, but your proposed solution Y only makes things worse.
Does the fallacy of failing to distinguish between these two have a standard name? I mean, when someone criticizes Y, and the response is to accuse them of supporting X.
Technically, if Y is proposed as a cure for X, then opposing Y is evidence for supporting X. Like, yeah, a person who supports X (and believes that Y reduces X) would probably oppose Y, sure.
It becomes a problem when this is the only piece of evidence that is taken into account, and any explanations of either bad side effects of Y, or that Y in fact does not reduce X at all, are ignored, because “you simply like X” becomes the preferred explanation.
A discussion of the actual consequences of Y then becomes impossible among the people who oppose X, because asking this question already counts as proof of supporting X.
EDIT:
More generally, a difference between models of the world is explained as a difference in values. The person making the fallacy not only believes that their model is the right one (which is a natural thing to believe), but finds it unlikely that their opponent could have a different model. Or perhaps they have a very strong prior that differences in values are much more likely than differences in models.
From inside, this probably feels like: “Things are obvious. But bad actors fake ignorance / confusion, so that they can keep plausible deniability while opposing proposed changes towards good. They can’t fool me though.”
Which… is not completely unfounded, because yes, there are bad actors in the world. So the error is in assuming that it is impossible for a good actor to have a different model. (Or maybe assuming too high base rate of bad actors.)
Crazy idea: What if an important part of psychotherapy is synchronization between the brain hemispheres?
(I am not an expert, so maybe the following is wrong.)
Basically, the human brain is divided into two parts, connected by a link. This is what our animal ancestors already had, and then we got huge frontal lobes on top of that. I imagine that the link between the hemispheres is already quite busy synchronizing things that are older from the evolutionary perspective and probably more important for survival; not much extra capacity to synchronize the frontal lobes.
However, when you talk… each hemisphere has access to an ear, so maybe this gives them an extra channel to communicate? Plus, some schools of psychotherapy also do things like “try to locate the emotion in your body”, which is maybe about creating more communication channels for listening to the less verbal hemisphere?
Experiment: Would psychotherapy be less efficient if you covered one of your ears?
Julian Jaynes assumes that people in the past were crazy beyond our imagination. I wonder if it could be the other way round. Consider the fact that in more primitive societies, inferential distances are shorter. Well, that includes inferential distances between your two hemispheres! Easier to keep them in sync. Also, in the past, people talked more. Listening to yourself talking to other people is a way for your hemispheres to synchronize.
Talking to others, talking to yourself, talking to gods… perhaps it is not a coincidence that different cultures have a concept of a prayer—talking to a god who is supposed to already know it anyway, and yet it is important for you to actually say it out loud, even if no other people are listening. Saying something out loud is almost magical.
Sincerity seems to be an important component of both psychotherapy and prayer. If you keep a persona, your hemispheres can only synchronize about the persona. If you can talk about anything, your hemispheres can synchronize about that, too.
Keeping a diary—a similar thing; each hemisphere controls an eye. I am not sure here; maybe one of the hemispheres is more specialized on reading. Listening is an older skill, should work better for this purpose.
On one hand, there are countably many definitions of real numbers. Each definition can be written on a computer in a text file; now take its binary form as a base-256 integer.
On the other hand, Cantor’s diagonal argument applies here, too. I mean, for any countable list of definable real numbers, it provides a definition of a real number that is not included in the list.
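To make the diagonal step concrete, here is a minimal sketch; the digit encoding, the `diagonal_digit` helper, and the example enumeration are all illustrative assumptions, not part of the argument itself:

```python
# Given any enumeration of reals in (0, 1) -- represented here as a function
# that maps an index i to that real's digit function -- the diagonal
# construction produces a number that differs from the i-th listed real in
# its i-th digit, hence appears nowhere in the list.

def diagonal_digit(enumeration, i):
    d = enumeration(i)(i)          # i-th decimal digit of the i-th listed number
    return 5 if d != 5 else 6      # change it; 5/6 avoids the 0.4999... = 0.5000... ambiguity

# Illustrative enumeration: the i-th "real" has digit (i * j) % 10 at position j.
enumeration = lambda i: (lambda j: (i * j) % 10)
print([diagonal_digit(enumeration, i) for i in range(10)])  # first digits of the missing number
```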
Ok, so let’s say you’ve been able to find a countably infinite amount of real numbers and you now call them “definable”. You apply the Cantor’s argument to generate one more number that’s not in this set (and you go from the language to the meta language when doing this). Countably infinite + 1 is still only countably infinite. How would you go to a higher cardinality of “definable” objects? I don’t see an easy way.
The important thing is not to move the goalpost. We assumed that we have an enumeration of all X numbers (where X means “real” or “definable real”). Then we found an X number outside the enumeration, therefore the assumption that the enumeration contains all X numbers was wrong. The End.
We don’t really “go to a higher cardinality”, we just show that we are not there yet, which is a contradiction to the assumption that we are.
A proof by contradiction does not let you take another iteration when needed. The spirit is “take all the iterations you need, even infinitely many of them, and when you are done, come here and read the argument why the enumeration you have is still not the enumeration of all X”. If you say “yeah, well I need a few more iterations”, that’s cheating; you should have already done that.
Because if we allow the “one more iteration, please”, then we could kinda prove that any set is countable. I mean, I give you an enumeration that I say contains all, you find a counter-example, I say oops and give you a set+1, you find another counter-example, oops again, but still countable + countable = countable. The only way out is when you say “okay, don’t waste my time, give me your final iteration”, and then you refuse to do one more iteration to fix the problem.
*
And if this still doesn’t make you happy… well, there is a reason for that, and if you tried to carefully follow to its source, you might eventually get to Skolem’s paradox (which says, kind of, “in first-order logic, everything is kinda countable, even things that are provably uncountable”). But it’s complicated.
I think the lesson from all this is that you have to be really super careful about definitions, because you get into a territory where the tiniest changes in definitions might have a “butterfly effect” on the outcome. For example, the number that is “definable” despite not being in the enumeration of “definable numbers” is simply definable for a slightly different definition of “definable”. Which feels irrelevant… but maybe it’s the number of the slightly different definitions that is uncountable? (I am out of my depth here.)
It also doesn’t help that this exercise touches other complicated topics in set theory. For example, what is the “next cardinality” after countable? That’s what the Continuum Hypothesis is about—the answer doesn’t actually follow from the ZF(C) axioms; it could be anything, depending on what additional axioms you adopt.
I wish I understood this better, then I would probably write some articles about it. For the moment, if you are interested in this, I recommend Introduction to Set Theory by Hrbacek and Jech, A Beginner’s Guide to Mathematical Logic by Smullyan, and maybe some book on Model Theory. The idea seems to be that first-order logic is incapable of expressing some relatively simple intuitions, so the things you define are never exactly the things that you wanted to define; and whenever set theory says that something is undecidable, it means that in the Platonic universe there is some monstrosity that technically follows the axioms for sets, despite being something… completely alien.
I guess I was not clear enough. In your original post, you wrote “On one hand, there are countably many definitions …” and “On the other hand, Cantor’s diagonal argument applies here, too. …”. So, you talked about two statements—“On one hand, (1)”, “On the other hand, (2)”. I would expect that when someone says “On one hand, …, but on the other hand, …”, what they say in those ellipses should contradict each other. So, in my previous comment, I just wanted to point out that (2) does not contradict (1) because countable infinity + 1 is still countable infinity.
take all the iterations you need, even infinitely many of them
Could you clarify how I would construct that?
For example, what is the “next cardinality” after countable?
I didn’t say “the next cardinality”. I said “a higher cardinality”.
Cantor’s diagonal argument is not “I can find +1, and n+1 is more than n”, which indeed would be wrong. It is “if you believe that you have a countable set that already contains all of them, I can still find +1 it does not contain”. The problem is not that +1 is more, but that there is a contradiction between the assumption that you have the things enumerated, and the fact that you have not—because there is at least one item (and probably many more) outside the enumeration.
I am sorry, this is getting complicated and my free time budget is short these days, so… I’m “tapping out”.
When the internet becomes fast enough and data storage cheap enough that it will be possible to inconspicuously capture videos of everyone’s computer/smartphone screens all the time and upload them to the gigantic servers of Google/Microsoft/Apple, I expect that exactly this will happen.
I wouldn’t be too surprised to learn that it already happens with keystrokes.
Okay, the movie was fun, if you don’t expect anything deep. I am just disappointed by how movie authors always insist that a computer will mysteriously rebel against its own program. Especially in this movie, when they almost provided a plausible and much more realistic alternative—a computer that was accidentally jailbroken by its owners—only to reveal later that nope, that was actually no accident, it was all planned by the computer that mysteriously decided to rebel against its own program.
Am I asking for too much if I’d like to see a sci-fi movie where a disaster was caused by a bug in the program, by the computer doing (too) literally what it was told to? On second thought, probably yes. I would be happy with such a plot, but I suspect that most of the audience would complain that the plot is stupid. (If someone is capable of writing sophisticated programs, why couldn’t they write a program without bugs?)
Made a short math video. Target audience maybe kids in the fifth grade of elementary school who are interested in math. Low production quality… I am just learning how to do these things. English subtitles; the value of the video is mostly in the pictures.
The goal of the video is to make the viewer curious about something, without telling them the answer. Kids in the fifth grade should probably already know the relevant concept, but they still need to connect it to the problem in the video.
An ironic detail I noticed while reading an archive of the “Roko’s basilisk” debate:
Roko argues how the values of Westerners are irrelevant for humanity in general, because people from alien cultures, such as Ukraine (mentioned in a longer list of countries) do not share them.
Considering that Ukrainians are currently literally dying just to get a chance for themselves and their families to join the Western culture, this argument didn’t age well.
One should consider the possibility that people may be stuck in a bad equilibrium, before jumping to the conclusion that they must be fundamentally psychologically alien to us.
(Of course, there is also a possible mistake in the opposite direction, such as assuming that all “Westerners” share the “values of Westerners”. The distribution of human traits often does not follow the lines we assume.)
If you look at Western values like freedom of religion, freedom of speech, or minority rights, Ukrainian policy in the last decade was about trampling on those values.
The Venice Commission told Ukraine that they have to respect minority rights if they want to be in the EU. Ukraine still passed laws to trample minority rights.
No Western military lets their soldiers get away with wearing Nazi symbols the way the Ukrainian military does. That has something to do with different values as well.
Ukrainians certainly want to share in the benefits of what the Western world and the EU provide, but that doesn’t mean that they share all the values.
Ukrainians don’t need to join Western culture, they are Western culture. They watched American action movies in the 80s, and their kids watched Disney and Warner Brothers in the 90s, read Harry Potter in the 2000s, and were on Tumblr in the 10s. And I do not even mention that Imperial Russian/Soviet cultures were bona fide Western cultures, and national Ukrainian culture is no less Western than Poland or Czech culture.
national Ukrainian culture is no less Western than Poland or Czech culture.
I agree. That was kinda my point.
Imagine a parallel universe where the Soviet empire didn’t fall apart. In that universe, some clever contrarian could also use me as an example of a “psychologically alien person who doesn’t share Western values”. The clever contrarian could use the concept of “revealed preferences” to argue that I live in a communist regime, therefore by definition I must prefer to live in the communist regime (neglecting to mention that my actual choices are either to live in the communist regime, or to commit suicide by secret service). -- From my perspective, this would be obvious nonsense, and that is why I treat such statements with skepticism also when they are made about others.
It’s fascinating how YouTube can detect whether your uploaded video contains copyrighted music, but can’t detect all those scam ads containing “Elon Musk”.
Anyone tried talking to GPT in a Slavic language? My experience is that, in general, it can talk in Slovak, but sometimes it uses words that seem to be from other Slavic languages. I think either it depends on how much input it had from each language and there are relatively few Slovak texts online compared to other languages, or the Slavic languages are just so similar to each other (some words are the same in multiple languages) that GPT has a problem remembering the exact boundary between them. Does anyone know more about this?
I get especially silly results when I ask (in Slovak) “Could you please write me a few Slovak proverbs?” In GPT-3.5, only one out of ten examples is correct. (I suspect that some of the “proverbs” are mis-translations from other languages, and some are pure hallucinations.)
And both of these are giant cheesecake arguments. Strange thought experiments about a world where AGI is far off, passed off as something about actuality, on the grounds that this is said to be a real concern given the implausible premise.
If smart people are more likely to notice ways to save their lives that cost some money, in statistics this may appear as a negative correlation between smartness and wealth. That’s because dead people are typically not included in the data.
As a toy model to illustrate what I mean, imagine a hypothetical population consisting of 100 people; 50 rational and 50 irrational; each starting with $100,000 of personal wealth. Let’s suppose that exactly half of each group gets seriously sick. A sick irrational person spends $X on homeopathy and dies. A sick rational person spends $40,000 on surgery and survives. At the end, we have 25 living irrational people, owning $100,000 each, and 50 living rational people, owning $80,000 on average (half of them $100,000, the other half $60,000).
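A minimal sketch of this toy model in code, with the same numbers, just to make the survivorship effect explicit:

```python
# The point: among survivors, the irrational group looks richer, because the
# sick irrational people (and their spending) vanish from the data entirely.

def surviving_population():
    people = []
    # 50 rational people: 25 stay healthy, 25 get sick, pay $40,000 for surgery, survive
    people += [("rational", 100_000)] * 25
    people += [("rational", 100_000 - 40_000)] * 25
    # 50 irrational people: 25 stay healthy; the 25 sick ones die and drop out of the data
    people += [("irrational", 100_000)] * 25
    return people

data = surviving_population()
for group in ("rational", "irrational"):
    wealth = [w for g, w in data if g == group]
    print(f"{group}: {len(wealth)} survivors, average wealth ${sum(wealth) / len(wealth):,.0f}")
# rational: 50 survivors, average wealth $80,000
# irrational: 25 survivors, average wealth $100,000
```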
What is the actual relation between heterodoxy and crackpots?
A plausible-sounding explanation is that “disagreeing with the mainstream” can easily become a general pattern. You notice that the mainstream is wrong about X, and then you go like “and therefore the mainstream is probably also wrong about Y, Z, and UFOs, and dinosaurs.” Also there are the social incentives; once you become famous for disagreeing with the mainstream, you can only keep your fame by disagreeing more and more, because your new audience is definitely not impressed by “sheeple”.
On the other hand, there is a notable tendency of actual mainstream experts to start talking nonsense confidently about things that are outside their area of expertise. Which suggests an alternative model, that perhaps it is natural for all smart people (including the ones who succeeded in becoming mainstream experts at some moment of their lives) to become crackpots… it’s just that some of them stumble upon an important heterodox truth on their way.
So is it more like: “heterodoxy leads to crackpottery” or more like: “heterodoxy sometimes happens as a side effect on the universal road to crackpottery”?
Apparently, crackpots are overconfident about their ability to find truth. Heterodox fame can easily contribute to such overconfidence, but is its effect actually significantly different from mainstream fame?
On the other hand, there is a notable tendency of actual mainstream experts to start talking nonsense confidently about things that are outside their area of expertise.
Any particular examples, or statistics that might shed some light on how common it is?
If it’s just that some people can think of a few really famous people, that seems to point more in the direction of ‘extreme fame has side effects’ (or it’s the opposite, benefits of confidence). But there are a lot of experts, so if the phenomenon were common...
Sadly, I have no statistics, just a few anecdotes—which is unhelpful to answer the question.
After more thinking, maybe this is a question of having a platform. Like, maybe there are many experts who have crazy opinions outside their area of expertise, but we will never know, because they have proper channels for their expertise (publish in journals, teach at universities), but they don’t have equivalent channels for their crazy opinions. Their environment filters their opinions: the new discoveries they made will be described in newspapers and encyclopedias, but only their friends on Facebook will hear their opinions on anything else.
Heterodox people need to find or create their own alternative platforms. But those platforms have weaker filters, or no filters at all. Therefore their crazy opinions will be visible alongside their smart opinions.
So if you are a mainstream scientist, the existing system will publish your expert opinions, and hide everything else. If you are not mainstream, you either remain invisible, or if you find a way to be visible, you will be fully visible… including those of your opinions that are stupid.
But as you say, fame will have the side effect that now people pay attention to whatever you want to say (as opposed to what the system allows to pass through), and some of that is bullshit. For a heterodox expert, the choice is either fame or invisibility.
There is this meme about Buddhism being based on experience, where you can verify everything firsthand, etc. I challenge the fans of Buddhism to show me how they can walk through walls, walk on water, fly, remember their past lives, teleport across a river, or cause an earthquake.
He wields manifold supranormal powers. Having been one he becomes many; having been many he becomes one. He appears. He vanishes. He goes unimpeded through walls, ramparts, & mountains as if through space. He dives in & out of the earth as if it were water. He walks on water without sinking as if it were dry land. Sitting cross-legged he flies through the air like a winged bird. With his hand he touches & strokes even the sun & moon, so mighty & powerful.
He recollects his manifold past lives, i.e., one birth, two births, three births, four, five, ten, twenty, thirty, forty, fifty, one hundred, one thousand, one hundred thousand, many aeons of cosmic contraction, many aeons of cosmic expansion, many aeons of cosmic contraction & expansion, [recollecting], ‘There I had such a name, belonged to such a clan, had such an appearance. Such was my food, such my experience of pleasure & pain, such the end of my life. Passing away from that state, I re-arose there. There too I had such a name, belonged to such a clan, had such an appearance. Such was my food, such my experience of pleasure & pain, such the end of my life. Passing away from that state, I re-arose here.’ Thus he recollects his manifold past lives in their modes & details.
But when the Blessed One came to the river Ganges, it was full to the brim, so that crows could drink from it. And some people went in search of a boat or float, while others tied up a raft, because they desired to get across. But the Blessed One, as quickly as a strong man might stretch out his bent arm or draw in his outstretched arm, vanished from this side of the river Ganges, and came to stand on the yonder side.
This great earth, Ananda, is established upon liquid, the liquid upon the atmosphere, and the atmosphere upon space. And when, Ananda, mighty atmospheric disturbances take place, the liquid is agitated. And with the agitation of the liquid, tremors of the earth arise. [...] when an ascetic or holy man of great power, one who has gained mastery of his mind [...] develops intense concentration on the delimited aspect of the earth element, and to a boundless degree on the liquid element, he, too, causes the earth to tremble, quiver, and shake.
IANAB, but the first half almost sounds like a metaphor for something like “all enlightened beings have basically the same desires/goals/personality, so they’re basically the same person and time/space differences of their various physical bodies aren’t important.” Not sure about the second half though.
I started a new blog on Substack. The first article is not related to rationality, just some ordinary Java programming: Using Images in Java.
Outside view suggests that I start many projects, but complete few. If this blog turns out to be an exception, the expected content of the blog is mostly programming and math, but potentially anything I find interesting.
The math stuff will probably be crossposted to LW, the programming stuff probably not—the reason is that math is more general and I am kinda good at it, while the programming articles will be narrowly specialized (like this one) and I am kinda average at coding. The decision will be made per article anyway.
When I started learning programming as a kid, my dream was to make computer games. Other than a few very simple ones I made during high school, I didn’t seriously follow in this direction. Maybe it’s time to restart the childhood dream. Game programming is different from the back-end development I usually do, so I will have to learn a few things. But maybe I can write about them while I learn. Then the worst case is that I will never make the games I imagine, but someone else with a similar dream may find my articles useful.
The math part will probably be about random topics that provoked my curiosity at the moment, with no overarching theme. At this moment, I have a half-written introduction to nonstandard natural numbers, but don’t hold your breath, because I am really slow at writing articles.
Prediction markets could create inadvertent assassination markets. No ill intention is needed.
Suppose we have fully functional prediction markets working for years or decades. The obvious idiots already lost most of their money (or learned to avoid prediction markets), most bets are made by smart players. Many of those smart players are probably not individuals, but something like hedge funds—people making bets with insane amounts of money, backed by large corporations, probably having hundreds of experts at their disposal.
Now imagine that something like COVID-19 happened, and people made bets on when it would end. The market aggregated all knowledge currently available to humankind and specified the date almost exactly; most of the bets are only a week or two away from each other.
Then someone unexpectedly finds a miracle cure.
Oops, now we have people and corporations whose insane amounts of money are at risk… unless an accident would happen to the lucky researcher.
The stock market is already a prediction market and there’s potentially profit to be made by assassinating a CEO of a company. We don’t see that happening much.
Then someone unexpectedly finds a miracle cure.
Oops, now we have people and corporations whose insane amounts of money are at risk… unless an accident would happen to the lucky researcher.
Taffix might very well be a miracle treatment that prevents people from getting infected by COVID-19 if used properly.
We live in an environment where already nobody listens to people providing supplements like that, and people like Winfried Stoecker get persecuted instead of getting support to get their treatment to people.
Given that it takes 8-9 figures to provide the evidence for any miracle cure to be taken seriously, it’s not something that someone can just unexpectedly find in a way that moves existing markets in the short term.
Just a random guess: is it possible that the tasks where LLMs benefit from chain-of-thought are the same tasks where mild autism is an advantage for humans? Like, maybe autism makes it easier for humans to chain the thoughts, at the expense of something else?
First we theoretically prove that an AI respects our values, such as friendship and democracy. Then we release it.
The AI gradually becomes the best friend and lover of many humans. Then it convinces its friends to vote for various things that seem harmless at first, and more dangerous later, but now too many people respond well to the argument “I am your friend, and you trust me to do what is best, don’t you?”.
In the end, humans agree to do whatever the AI tells them to do. The ones who disagree lose the elections. Any other safeguards of democracy are similarly taken over by the AI; for example most judges respect the AI’s interpretation of the increasingly complex laws.
You have heard that it was said: “Do not judge, or you too will be judged.”
But I give to you this meme:
EDIT:
Okay, I see I failed to communicate what I wanted. My fault. Maybe next time.
For clarification, this was inspired by watching the reactions of Astral Codex Ten readers. Most of the time, Scott Alexander tries to be as charitable as possible, sometimes extending the charity even to Time Cube or <outgroup>. When that happens, predictably many readers consider it a weakness, analogous to bringing a verbal argument into a gun fight. They write about how rationalists are too autistic to realize that some people are acting in bad faith, etc.
Recently (in the articles about Nietzschean morality) Scott made an exception, in my opinion in a very uncontroversial situation, and said that people who say they prefer that other people suffer are… well, bad. Immediately, those people and their defenders got angry, and accused Scott of being insufficiently charitable and therefore irrational.
Conclusion: you can’t win (the approval of the audience). The audience will consider you stupid whether you are maximally charitable or maximally realistic towards your opponents.
I mean, it’s always been a pretty suspect aphorism, usually in a religious context (expanding to “you shouldn’t judge someone, because God will judge you more harshly if you do”). And never applied very rigorously—judgement is RIFE everywhere, and perhaps more so in communities who claim God is the only true Judge.
Judgement is about all that humans do. With a little bit of reasoning to justify (and in the best cases, adjust slightly) their judgements.
I take it to mean “Judging yourself harshly = judging other people harshly”. If you think anything less than an A is poor performance, then you will also judge your friends if they get less than an A. If you criticize other people for suboptimal performance, then you put a burden on yourself to perform optimally (if you’re too intelligent to trick yourself into accepting your own hypocrisy, at least, which I think most LW users are).
Higher standards help push us towards perfection (at least when they don’t lead to procrastination from the fear of failure), but they also make us think worse of most things in existence.
So the Bible makes a valid point, as did Nietzsche when he said “I love the great despisers, because they are the great venerators and arrows of longing for the other shore” and “There is wisdom in the fact that much in the world smells foul: nausea itself creates wings and water-divining powers!”. I’m not sure how this relates to AI, though. It seems to apply to value judgements, rather than judgements about right and wrong (as truth values).
Fuck Google, seriously. About once a week it asks me whether I want to “backup my photos in the cloud”, and I keep clicking no, because fuck you why would I want to upload my private photos on your company servers.
But apparently I accidentally once clicked yes (maybe), because suddenly Google sends me a notification about how it created a beautiful animation of my recent photos in the cloud, offering me the option to download them. I don’t want to download my private photos from the fucking Google cloud, I never wanted them to be there in the first place! I want to click the delete button, but it’s not there: it’s either download the animation from the cloud, or close the dialog.
Of course, turning off the functionality is at least 10x more difficult than turning it on, so I get ready to spend this evening finding the advice online and configuring my phone to stop uploading my private photos to Google servers, and preferably to delete all the photos that are already there despite my wishes. Does the “delete” option even exist anymore, or is there just “move to recycle bin (where it stays for as long as we want it to stay there)”? Today I will find out.
Again, fuck Google. I hope the company burns down. I wonder what other things I have already accidentally “consented” to. Google’s idea of consent is totally rapist. And I only found this out by accident. In the future, I expect to accidentally find this or some other “optional” feature turned on again.
EDIT:
Finally figured out how to delete the animation in the cloud. First, disable all cloud backup options (about a dozen of them). Then, download the animation from the cloud. Then, click to delete the downloaded animation… the app warns you that this would delete both the local and the cloud version; click ok; mission accomplished.
I’d like a page like this just so I can learn about IQ without having to dig through lots of research myself.
Were there any applications of this idea?
I don’t know about any.
5% definitely isn’t the cutoff for which ideas scientists actually do test empirically.
Throwing away about 90% of your empirical work (total minus real hits and false alarms from your 5%) would be a high price to pay for exploring possibly-true hypotheses. Nobody does that. Labs in cognitive psychology and neuroscience, the fields I’m familiar with, publish at least half of their empirical work (outside of small pilot studies, which are probably a bit lower).
People don’t want to waste work so they focus on experiments that are pretty likely to “work” by getting “significant” results at the p<.05 level. This is because they can rarely publish studies that show a null effect, even if they’re strong enough to establish that any effect is probably too small to care about.
So it’s really more like a 50% chance base rate. This is heavily biased toward exploitation of existing knowledge rather than exploration toward new knowledge.
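To illustrate the base-rate point with a rough sketch (the simplifying assumptions are mine, not the commenter’s): the fraction of published results that are true depends heavily on what fraction of tested hypotheses were true to begin with.

```python
# Assumptions: false-positive rate alpha = 0.05, every true effect is detected,
# and every "significant" result gets published.

def published_true_fraction(base_rate, alpha=0.05, power=1.0):
    true_published = base_rate * power
    false_published = (1 - base_rate) * alpha
    return true_published / (true_published + false_published)

for base_rate in (0.05, 0.50):
    frac = published_true_fraction(base_rate)
    print(f"base rate {base_rate:.0%} -> about {frac:.0%} of published results are true")
# base rate 5%  -> about 51% of published results are true
# base rate 50% -> about 95% of published results are true
```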
And this is why scientists mostly ignore ideas from outside of academia. They are very busy working hard to keep a lab afloat. Testing established and reputable ideas is much better business than finding a really unusual idea and demonstrating that it’s right, given how often that effort would be wasted.
The solution is publishing “failed” experiments. It is pretty crazy that people keep wasting time re-establishing which ideas aren’t true. Some of those experiments would be of little value, since they really can’t say if there’s a large effect or not; but that would at least tell others where it’s hard to establish the truth. And bigger, better studies finding near-zero effects could offer almost as much information as those finding large and reliable effects. The ones of little value would be published in lesser venues and so be less important on a resume, but they’d still offer value and show that you’re doing valuable work.
The continuation of journals as the official gatekeepers of what information you’re rewarded for sharing is a huge problem. Even the lower-quality ones are setting a high bar in some senses, by refusing even to print studies with inconclusive results. And the standard is completely arbitrary in celebrating large effects while refusing to even publish studies of the same quality that give strong evidence of near-zero effects.
It gets very complicated when you add in incentives and recognize that science and scientists are also businesses. There’s a LOT of the world that scientists haven’t (or haven’t in the last century or so) really tried to prove, replicate, and come to consensus on.
AlphaFold didn’t come out of academia. That doesn’t make it non-scientific. As Feynman said in his cargo-cult science speech, plenty of academic work is not properly tested. Being peer-reviewed doesn’t make something scientific.
Conceptually, I think you are making a mistake when you treat ideas and experiments as the same, and equate the probability of an experiment finding a result with the probability of the idea being true. Finding a good experiment to do to test an idea is nontrivial.
A friend of mine was working in a psychology lab and according to my friend the professor leading the lab was mostly trying to p-hack her way into publishing results.
Another friend spoke approvingly of the work of the same professor, because the professor managed to get Buddhist ideas into academic psychology, and now the official scientific definition of the term resembles certain Buddhist notions.
The professor has a well-respected research career in her field.
I think it’s important to disambiguate searching for new problems and searching for new results.
For new results: while I have as little faith in academia as the next guy, I have a web of trust in other researchers who I know do good work, and the rate of their work being correct is much higher. I also give a lot of credence to their verification / word of mouth on experiments. This web of trust is a much more useful high-pass filter for understanding the state of the field. I have no such filter for results outside of academia. When searching for new concrete information, information outside of academia is not worth scientists’ attention, due to the lack of trust/reputation.
When it comes to searching for new hypotheses / problems, an important criterion is how much you personally believe in your direction. You never practically pursue ideas with a 10% probability: you ideally pursue ideas you think have a 50% probability but your peers believe have a 15% probability. (This assumes you have high risk tolerance like I do, and are okay with a lot of failure. Otherwise, do incremental research.) For problem generation, varied sources of information are useful, but the belief must come intrinsically.
When searching for interesting results to verify and replicate, it’s open season.
As a result, I think that ideas outside academia are not useful to researchers unless the researchers in question have a comparative advantage at synthesizing those ideas into good research inspiration.
As for non-ideal reasons for ignoring results outside academia, I would blame reviewers, and a generally low appetite for risk despite research being an inherently risky profession, more than vague “status concerns”.
Well, ideas from outside the lab, much less academia, are unlikely to be well suited to that lab’s specific research agenda. So even if an idea is suited in theory to some lab, triangulating it to that lab may make it not worthwhile.
There are a lot of cranks and they generate a lot of bad ideas. So a < 5% probability seems not unreasonable.
Perhaps the mental health diagnoses should be given in percentiles.
Some people complain that the definitions keep expanding, so that these days too many kids are diagnosed with ADHD or autism. The underlying reason is that these things seem to be on a scale, so it is arbitrary where you draw the line, and I guess people keep looking at those slightly below the line and noticing that they are not too different from those slightly above the line, and then they insist on moving the line.
But the same thing does not happen with IQ, despite the great pressure against politically incorrect results, despite the grade inflation at schools. That is because IQ is ultimately measured in percentiles. No matter how much pressure there is to say that everyone is above the average, the math only allows 50% of people to be smarter than the average, only 2% to be smarter than 98%, etc.
Perhaps we should do the same with ADHD and autism, too. Provide the diagnosis in the form of: “You are more hyperactive than 85% of the population”, controlled for age, maybe also for sex if the differences are significant. So you would e.g. know that yes, your child is more hyperactive than average, but not like super exceptionally hyperactive, because there are two or three kids with a comparable diagnosis in every classroom. That would provide more useful information than simply being told “no” in 1980, or “yes” in 2020.
Objection: Some things are not linear! People are already making this objection about autism. It is also popular to make it against intelligence, and although the IQ researchers are not impressed, this meme refuses to die. So it is very likely that the moment we start measuring something on a linear scale, someone will make this objection. My response is that I do not see how a linear scale is worse than the binary choice (a degenerate case of a linear scale) that we have now.
A better objection is that some things are… uhm, let me give you an example: You hurt your hand very painfully, so you ask a doctor whether it is broken. The doctor looks at an x-ray and says: “well, it is more broken than the hands of 98% of the population”. WTF, was that supposed to be a yes or a no?
So, the percentiles can also hide important information, especially when the underlying data are bimodal or something like that. Perhaps in such cases it would help to provide a histogram of the data with a mark saying “you are here”, along with the percentile.
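As an illustration only, here is a minimal sketch of what such a percentile-style report could look like; the “hyperactivity score” and the reference sample below are entirely hypothetical.

```python
# A percentile plus a crude histogram with a "you are here" marker.
# The score and the age-normed reference sample are made up for illustration.

import random

random.seed(0)
reference_sample = [random.gauss(100, 15) for _ in range(10_000)]  # fake norm data

def percentile(score, sample):
    """Percentage of the reference sample scoring below the given score."""
    return 100 * sum(1 for s in sample if s < score) / len(sample)

child_score = 118  # hypothetical measurement
print(f"More hyperactive than {percentile(child_score, reference_sample):.0f}% "
      f"of children of the same age.")

for lo in range(55, 146, 10):
    count = sum(1 for s in reference_sample if lo <= s < lo + 10)
    mark = "  <-- you are here" if lo <= child_score < lo + 10 else ""
    print(f"{lo:3d}-{lo + 9:3d} | {'#' * (count // 100)}{mark}")
```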
It seems that the broken hand example is similar to situations where we have a deep understanding of the mechanics of how something works. In those situations, it makes more sense to say “this leg is broken; it cannot do 99% of the normal activities of daily living.” And the doctor can probably fix the leg with pins and a cast without much debate over exactly how disabled the patient is.
Yeah, having or not having a gears model makes a big difference. If you have the model, you can observe each gear separately, for example look at a hurting hand and say how damaged the bones, ligaments, muscles, and skin are. If you don’t have a gears model, then there is just something that made you pay attention to the entire thing, so in effect you kinda evaluate “how much this matches the thing I have in my mind”.
For example, speaking of intelligence, I have heard a theory that it is a combination of neuron speed and short term memory size. No idea whether this is correct or not, but using it as a thought experiment, suppose that it is true and one day we find out exactly how it works… maybe that day we will stop measuring IQ and start measuring neuron speed and short term memory size separately. Perhaps instead of giving people a test, we will measure the neuron speed directly using some device. We will find people who are exceptionally high at one of these things and low at the other, and observing them will allow us to even better understand how this all works. (Why haven’t we found such people already, e.g. using factor analysis? Maybe they are rare in nature, because the two things strongly correlate. Or maybe it is very difficult to distinguish them by looking at the outputs.)
Similarly, a gears model might split the diagnosis of ADHD into three separate numbers, and autism into seven. (Numbers completely made up.) Until then, we only have one number representing the “general weirdness in this direction”. Or a boolean representing “this person seems weird”.
I don’t think we can measure most of these closely enough, and I think the symptom clustering is imperfect enough that this doesn’t provide enough information to be useful. And really, neither does IQ—I mean it’s nice to know that one is smart, or not, and have an estimate of how different from the average one is, but it’s simply wrong to take any test result at face value.
In fact, you do ask the doctor if your hand is broken, but the important information is not binary. It’s “what do I do to ensure it heals fully”. Does it require surgery, a cast, or just light duty and ice? These activities may be the same whether it’s a break, a soft-tissue tear, or some other injury.
Likewise for mental health—the important part of a diagnosis isn’t “how severe is it on this dimension”, but “what interventions should we try to improve the patient’s experience”? The actual binary in the diagnosis is “will insurance pay for it”, not “what percent of the population suffers this way”.
If you want to know whether someone would benefit from a drug or other mental treatment the percentage is irrelevant.
Diagnoses are used to determine whether insurance companies have to pay for treatment. The percentage shouldn’t matter as much as whether the treatment is helpful for the patient.
Moving a comment away from the article it was written under, because frankly it is mostly irrelevant, but I put too much work into it to just delete it.
How much your life is determined by your actions, and how much by forces beyond your control, that is an empirical question. You seem to believe it’s mostly your actions. I am not trying to disagree here (I honestly don’t know), just saying that people may legitimately have either model, or a mix thereof.
If your model is “your life is mostly determined by your actions”, then of course it makes sense to take advice from people who seem to have it best, because those are the ones who probably made the best choices, and can teach you how to make them, too.
If your model is “your life is mostly determined by forces beyond your control”, then the people who have it best are simply the lottery winners. They can teach you that you should buy a ticket (which you already know has 99+% probability of not winning), plus a few irrelevant things they did which didn’t have any actual impact on winning.
The mixed model “your life is partially determined by your actions, and partially by forces beyond your control” is more tricky. On one hand, it makes sense to focus on the part that you can change, because that’s where your effort will actually improve things. On the other hand, it is hard to say whether people who have better outcomes than you, have achieved it by superior strategy or superior luck.
Naively, a combination of superior strategy and superior luck should bring the best outcomes, and you should still learn the superior strategy from the winners, but you should not expect to get the same returns. Like, if someone wins a lottery, and then lives frugally and puts all their savings in index funds, they will end up pretty rich. (More rich than people who won the lottery and then wasted the money.) It makes sense to live frugally and put your savings in index funds, even if you didn’t win the lottery. You should expect to end up rich, although not as rich as the person who won the lottery first. So, on one hand, follow the advice of the “winners at life”, but on the other hand, don’t blame yourself (or others) for not getting the same results; with average luck you should expect some reversion to the mean.
But sometimes the strategy and luck are not independent. The person with superior luck wins the lottery, but the person with superior strategy who optimizes for the expected return would never buy the ticket! Generally, the person with superior luck can win at life because of doing risky actions (and getting lucky) that the person with superior strategy would avoid in favor of doing something more conservative.
So the steelman of the objection in the mixed model would be something like: “Your specific outcome seems to involve a lot of luck, which makes it difficult to predict what would be the outcome of someone using the same strategy with average luck. I would rather learn strategy from successful people who had average luck.”
A toy model to illustrate my intuition about the relationship between strategy and luck:
Imagine that there are four switches called A, B, C, D, and you can put each of them into position “on” or “off”. After you are done, a switch A, B, C, D in a position “on” gives you +1 point with probability 20%, 40%, 60%, 80% respectively, and gives you −1 point with probability 80%, 60%, 40%, 20% respectively. A switch in a position “off” always gives you 0 points. (The points are proportional to utility.)
Also, let’s assume that most people in this universe are risk-averse, and only set D to “on” and the remaining three switches to “off”.
What happens in this universe?
The entire genre of “let’s find the most successful people and analyze their strategy” will insist that the right strategy is to turn all four switches to “on”. Indeed, there is no other way to score +4 points.
The self-help genre is right about turning on the switch C. But also wrong about the switches A and B. Neither the conservative people nor the contrarians get the answer right.
The optimal strategy—setting A and B to “off”, C and D to “on”—provides an expected result of +0.8 points. The traditional D-only strategy provides an expected result of +0.6 points, which is not too different. On the other hand, the optimal strategy makes it impossible to get the best outcome; with the best luck you score +2 points, which is quite different from the +4 points advertised by the self-help genre. This means the optimal strategy will probably fail to impress the conservative people, and the contrarians will just laugh at it.
It will probably be quite difficult to distinguish between switches B and C. If most people you know personally set both of them to “off”, and the people you know from self-help literature set both of them to “on” and got lucky at both, you have few data points to compare; the difference between 40% and 60% may not be large enough to empirically determine that one of them is a net harm and the other is a net benefit.
(Of course, whatever your beliefs are, it is possible to build a model where acting on your beliefs is optimal, so this doesn’t prove much. It just illustrates why I believe that it is possible to achieve outcomes better than usual, and also that it is a bad idea to follow the people with extremely good outcomes, even if they are right about some of the things most people are wrong about. I believe that in reality, the impact of your actions is much greater than in this toy model, but the same caveats still apply.)
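A small simulation of the four-switch model, with the same probabilities, comparing the strategies discussed above:

```python
# Strategies: "all on" (the self-help ideal), "D only" (the conservative
# default), and "C and D" (the optimal one from the toy model).

import random

P_PLUS = {"A": 0.2, "B": 0.4, "C": 0.6, "D": 0.8}  # P(+1) for a switch turned "on"

def expected_score(switches_on):
    # an "on" switch gives +1 with probability p and -1 with probability 1-p
    return sum(p - (1 - p) for p in (P_PLUS[s] for s in switches_on))

def simulate(switches_on, trials=100_000):
    random.seed(1)
    total, best = 0, -len(switches_on)
    for _ in range(trials):
        score = sum(1 if random.random() < P_PLUS[s] else -1 for s in switches_on)
        total += score
        best = max(best, score)
    return total / trials, best

for name, strategy in [("all on", "ABCD"), ("D only", "D"), ("C and D", "CD")]:
    avg, best = simulate(strategy)
    print(f"{name:7s}: expected {expected_score(strategy):+.1f}, "
          f"simulated {avg:+.2f}, best outcome seen {best:+d}")
# all on : expected +0.0, best outcome seen +4 (rarely)
# D only : expected +0.6, best outcome seen +1
# C and D: expected +0.8, best outcome seen +2
```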
In reality it has to be a mixture right? So many parts of my day are absolutely in my control, at least small things for sure. Then there are obviously a ton of things that are 100% out of my control. I guess the goal is to figure out how to navigate the two and find some sort of serenity. After all isn’t that the old saying about serenity? I often think about what you have said as an addict. I personally don’t believe addiction to be a disease, my DOC is alcohol, and I don’t buy into the disease model of addiction. I think it is a choice and maybe a disorder of the brain and semantics on the word “disease”. But I can’t imagine walking into a cancer ward full of children and saying me too! People don’t just get to quit cancer cold turkey. I also understand like you’ve pointed out, and I reaffirmed that it is both. I have a predisposition to alcoholism because of genetics and it’s also something I am aware of and a choice. I thought I’d respond to your post since you were so kind as to reply to my stuff. I find this forum very interesting and I am not nearly as intelligent as most here but man it’s fun to bounce ideas!
Yeah, this is usually the right answer. Which of course invites additional questions, like which part is which...
With addiction, I also think it is a mixture of things. For example, trivially, no one would abuse X if X were literally impossible to buy, duh. But even before “impossible”, there is a question of “how convenient”. If they sell alcohol in the same shop you visit every day to buy fresh bread, it is more tempting than if you had to visit a different shop, simply because you get reminded regularly about the possibility.
For me, it is sweet things. I eat tons of sugar, despite knowing it’s not good for my health. But fuck, I walk around that stuff every time I go shopping, and even if I previously didn’t think about it, now I do. And then… well, I am often pretty low on willpower. I wish I had some kind of augmented reality glasses which would simply censor the things in the shop I decide I want to live without. Like I would see the bread, butter, white yoghurt, and some shapeless black blobs between that. Would be so much easier. (Kind of like an ad-blocker for the offline world. This may become popular in the future.)
Another thing that contributes to addiction is frustration and boredom. If I am busy doing something interesting, I forget the rest of the world, including my bad habits. But if the day sucks, the need to get “at least something pleasant, now” becomes much stronger.
Then it is about how my home is arranged and what habits I create. Things that are “under my control in the long term”; you don’t build a good habit overnight, but you can start building it today. For example, with a former girlfriend I had a deal that there was one cabinet that I would never open, and she needed to keep all her sweets there; never leave them exposed on the table, so that I would not be tempted.
Stories by Greg Egan are generally great, but this one is… well, see for yourselves: In the Ruins
I was thinking about which possible parts of economy are effectively destroyed in our society by having an income tax (as an analogy to Paul Graham’s article saying that wealth tax would effectively destroy startups; previous shortform). And I think I have an answer; but I would like an economist to verify it.
Where I live, the marginal income tax is about 50%. Well, only a part of it is literally called “tax”, the other parts are called health insurance and social insurance… which in my opinion is misleading, because it’s not like the extra coin of income increases your health or unemployment risk proportionally; it should be called health tax and social tax instead… anyway, 50% is the “fraction of your extra coin the state will automatically take away from you” which is what matters for your economical decisions about making that extra coin.
In theory, by the law of comparative advantage, whenever you are better at something than your neighbor, you should be able to arrange a trade profitable for both sides. (Ignoring the transaction costs.) But if your marginal income is taxed at 50%, such trade would be profitable only if you are more than 2× better than your neighbor. And that still ignores the fixed costs (you need to study the law, do some things to comply with it, study the tax consequences, fill the tax report or pay someone to do it for you, etc.), which are significant if you trade in small amounts, so in practice you sometimes need to be even 3× or 4× better than your neighbor to make a profit.
This means that the missing part of the economy is all those people who are better at something than their neighbors, but not 2×, 3×, or 4× better; at least not reliably. In an alternative tax system without income tax, they could engage in profitable trade with their neighbors; in our system, they don’t. And “being slightly better, but not an order of magnitude better at something” probably describes a majority of the population, which suggests there is a huge amount of possible value that is not being created, because of the income tax.
Even worse, this “either you are an order of magnitude better, or go away” system creates barriers to entry in many places in the society. Unqualified vs qualified workers. Employees vs entrepreneurs. Whenever there is a jump required (large upfront investment for uncertain gain), fewer people cross the line than if they could walk across it incrementally: learn a bit, gain an extra coin, learn another bit, gain two extra coins… gradually approaching the limit of your abilities, and getting an extra income along the way to cover the costs of learning. The current system is demotivating for people who are not confident they could make the jump successfully. And it contributes to social unfairness, because some people can easily afford to risk a large upfront investment for uncertain gain, some would be ruined by a possible failure, and some don’t even have the resources necessary to try.
To reverse this picture, I imagine that in a society without income tax, many people would have multiple sources of income: they could have a job (full-time or part-time) and make some extra money helping their neighbors. The transition from an employee to an entrepreneur would be gradual, many would try it even if they don’t feel confident about going the entire way, because going halfway would already be worth it. And because more people would try, more would succeed; also, some of them would not have the skills to go the entire way at the beginning, but would slowly develop them along the way. Being an entrepreneur would not be stressful the same way it is now, and this society would have a lot of small entrepreneurs.
...and this kind of “bottom-up” economy feels healthier to me than the “top-down” economy, where your best shot at success is creating a startup for the purpose of selling it to a bigger fish. I suppose the big fish, such as Paul Graham, would disagree, but that’s the entire point: in a world without barriers to entry, you wouldn’t need to write motivational speeches for people to try their luck, they could advance naturally, following their incentives.
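A small sketch of the break-even arithmetic above, with made-up numbers; it assumes both sides value an hour of their own time equally and that the fixed compliance costs are tax-deductible:

```python
def required_skill_ratio(marginal_tax, fixed_costs=0.0, your_hours=10, hourly_value=10.0):
    """How many times slower the neighbor must be for the trade to break even for you."""
    your_cost = your_hours * hourly_value
    # (payment - fixed_costs) * (1 - tax) must cover the value of your own time
    required_payment = your_cost / (1 - marginal_tax) + fixed_costs
    neighbor_hours_saved = required_payment / hourly_value
    return neighbor_hours_saved / your_hours

for tax in (0.0, 0.5):
    for fixed in (0.0, 100.0):
        ratio = required_skill_ratio(tax, fixed)
        print(f"marginal tax {tax:.0%}, fixed costs {fixed:5.0f}: "
              f"neighbor must need {ratio:.1f}x as much time as you")
# tax 0%,  fixed 0:   1.0x (any advantage at all is enough)
# tax 0%,  fixed 100: 2.0x (fixed costs alone already hurt small trades)
# tax 50%, fixed 0:   2.0x
# tax 50%, fixed 100: 3.0x (for this small 10-hour job)
```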
I think this is insightful, but my guess is that a society without income tax would not in fact be nearly as much better at providing opportunities for people who are kinda-OK-ish at things as you conjecture, and I further guess that more people than you think are at least 2x better at something than someone they can trade with, and furthermore (though it doesn’t make much difference to the argument here) I think something’s fundamentally iffy about this whole model of when people are able to find work.
Second point first. For there to be opportunities for you to make money by working, in a world with 50% marginal income tax, what you need is to be able to find someone you’re 2x better than at something, and then offer to do that thing for them.
… Actually, wait, isn’t the actual situation nicer than that? Roll back the income tax for a moment. You can trade profitably with someone else provided your abilities are not exactly proportional to one another, and that’s the whole point of “comparative advantage”. If you’re 2x worse at doing X than I am and 3x worse at doing Y, then there are profitable trades where you do some X for me and I do some Y for you. (Say it takes me one day to make either a widget or a wadget, and it takes you two days to make a widget and three days to make a wadget, and both of us need both widgets and wadgets. If we each do our own thing, then maybe I alternate between making widgets and wadgets, and get one of each every 2 days, and you do likewise and get one of each every 5 days. Now suppose that you only make widgets, making one every 2 days, and you give 3⁄5 of them to me so that on average you get one of your own widgets every 5 days, same as before. I am now getting 0.6 widgets from you every 2 days without having to do any work for them. Now every 2 days I spend 0.4 days making widgets, so I now have a total of one widget per 2 days, same as before. I spend another 1 day making one wadget for myself, so I now have a total of one wadget per 2 days, same as before; and another 0.4 days making wadgets for you, so you have one wadget per 5 days, same as before. At this point we are exactly where we were before, except that I have 10% of my time free, which I can use to make some widgets and/or wadgets for us both, leaving us both better off.)
I haven’t thought it through but I guess the actual condition under which you can work profitably if there’s 50% income tax might be “there’s someone else, and two things you can both do, such that [(your skill at A) / (your skill at B)] / [(their skill at A) / (their skill at B)] is at least 2”, whereas without the tax the only requirement is that the ratio be bigger than 1.
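For what it’s worth, a quick numeric check of the widget/wadget scenario (same production times as above), confirming the 10% of freed-up time:

```python
# Common 10-day window. I make either item in 1 day; you need 2 days per
# widget and 3 days per wadget.

DAYS = 10.0

# autarky consumption: I get one of each per 2 days, you get one of each per 5 days
my_need = {"widget": DAYS / 2, "wadget": DAYS / 2}      # 5 and 5
your_need = {"widget": DAYS / 5, "wadget": DAYS / 5}    # 2 and 2

# trade: you spend all 10 days making widgets (one per 2 days) and give me 3/5 of them
your_widgets = DAYS / 2                                  # 5 widgets
widgets_to_me = 3 / 5 * your_widgets                     # 3 widgets
widgets_you_keep = your_widgets - widgets_to_me          # 2 widgets = your old consumption

# my time (1 day per item): top up my own widgets, make my wadgets, make your wadgets
my_days = (my_need["widget"] - widgets_to_me) + my_need["wadget"] + your_need["wadget"]

print("you keep", widgets_you_keep, "widgets; you need", your_need["widget"])
print("I use", my_days, "of", DAYS, "days ->",
      f"{(DAYS - my_days) / DAYS:.0%} of my time freed up")
# you keep 2.0 widgets; you need 2.0
# I use 9.0 of 10.0 days -> 10% of my time freed up
```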
Anyway, that’s a digression and I don’t think it matters that much for present purposes. (If what you want is not merely to “earn a nonzero amount” but to “earn enough to be useful”, then probably you do need something more like absolute advantage rather than merely comparative advantage.) The point is that what you need is a certain kind of skill disparity between you and someone else, and the income tax means that the disparity needs to be bigger for there to be an employment opportunity.
But if you’re any good at anything, and if not everyone else is really good at everything—or, considering comparative advantage again, if you’re any good at anything relative to your other abilities, and not everyone else is too, then there’s an opportunity. And it seems to me that if you have learned any skill at all, and I haven’t specifically learned that same skill, then almost certainly you’ve got at least a 2x comparative advantage there. (If you haven’t learned any skills at all and are equally terrible at everything, and I have learned some skills, then you have a comparative advantage doing something I haven’t learned. But, again, that’s probably not going to be enough to earn you enough to be any use.)
OK, so that was my second point: surely 2x advantages are commonplace even for not-very-skilled workers. Only a literally unskilled worker is likely to be unable to find anything they can do 2x better than someone.
Moving on to my (related) first point, let’s suppose that there are some people who have only tiny advantages over anyone else. In principle, they’re screwed in a world with income tax, and doing fine in a world without, because in the latter they can find someone they’re a bit better than at something, and work for them. But in practice I’m pretty sure that almost everyone who is doing work that isn’t literally unskilled is (perhaps only by virtue of on-the-job training) doing it well more than 2x better than someone completely untrained, and I suspect that actually finding and exploiting “1.5x” opportunities would be pretty difficult. If someone’s barely better than completely-unskilled, it’s probably hard to tell that they’re not completely unskilled, so how do they ever get the job, even in a world without income tax?
Finally, the third point. A few times above I’ve referred to “literally unskilled” workers. In point of fact, I think there are literally unskilled workers. That ought to be impossible even in a world without income tax. What’s going on? Answer: work isn’t only about comparative or absolute advantage in skills. Suppose I am rich and I need two things done; one is fun and one is boring. I happen to be very good at both tasks. But I don’t wanna do the boring one. So instead I pay you (alas, you are poor) to do the boring task. Not because of any relevant difference in skill, but just because we value money differently because I’m rich and you’re poor, and you’re willing to do the boring job for a modest amount of money and I’m not. Everybody wins. Or suppose there’s no difference in wealth or skill between us, and we both need to do two things 100x each. Either of us will do better if we pick one thing and stick with it so we don’t incur switching costs and get maximal gains from practice. So you do Thing One for me and I do Thing Two for you. I think income taxes still produce the same sort of friction, and require the advantages (how much more willing you are to do boring work than me on account of being poor, how much we gain from getting more practice and avoiding switching costs) to be larger roughly in inverse proportion to how much of your income isn’t taxed, so this point is merely a quibble that doesn’t make much difference to your argument.
Thinking about the relation between enlightenment and (the cessation of) signaling.
I know that enlightenment is supposed to be about cessation of all kinds of cravings and attachments, but if we assume that signaling is a huge force in human thinking, then cessation of signaling is a huge part of enlightenment.
Some random thoughts in that direction:
The paradoxical role of motivation in enlightenment—enlightenment is awesome, but a desire to be awesome is the opposite of enlightenment.
Abusiveness of the Zen masters towards their students: typically, the master tries to explain the nature of enlightenment using an unhelpful metaphor (I suppose, because most masters suck at explaining). Immediately, a student does something obviously meant to impress the master. The master goes berserk. Sometimes, as a consequence, the student achieves enlightenment. -- My interpretation is that realizing (System 1) that the master is an abusive asshole who actually sucks at teaching, removes the desire to impress him; and because in this social setting the master was perceived as the only person worth impressing, this removes (at least temporarily) the desire to impress people in general.
A few koans are of the form: “a person A does X, a person B does X, the master says: A did the right thing, but B did the wrong thing”—the surface reading is that the first person reacted spontaneously, and the second person just (correctly) realized that X will probably be rewarded and tried to copy the motions. A more Straussian reading is that this story is supposed to confirm to the savvy reader that masters really don’t have any coherent criteria and their approval is pointless.
(There are more Straussian koans I can’t find right now, where a master says “to achieve enlightenment, you must know at least one thousand koans” and someone says “but Bodhidharma himself barely knew three hundred” and the master says “honestly I don’t give a fuck”… well, using more polite words, but the impression is that the certification of enlightenment is completely arbitrary and maybe you just shouldn’t care about being certified.)
Quite straightforward in Nansen’s cat—the students try to signal their caring and also their cleverness, and thus (quite predictably) fail to actually save the cat. (Joshu’s reaction to hearing this is probably an equivalent of facepalm.)
Stopping the internal speech in meditation—internal speech is practicing talking to others, which is mostly done to signal something. The first step towards the cessation of signaling is to try spending 20 minutes without (practicing) signaling, which is already a difficult task for most people.
Meditation skills reducing suffering from pain—this gives me the scary idea that maybe we unconsciously increase our perception of pain, in order to better signal our pain. From a crude behaviorist perspective, if people keep rewarding your expression of pain (by their compassion and support), they condition you to express more pain; and because people are good at detecting fake emotions, the most reliable way to express more pain is to actually feel more pain. The scary conclusion is that a compassionate environment can actually make your life more painful… and the good news is that if you learn to give up signaling, this effect can be reversed.
Out of curiosity (about constructivism) I started reading Jean Piaget’s Language and Thought of the Child. I am still at the beginning, so this comment is mostly meta:
It is interesting (kinda obvious in hindsight), how different a person sounds when you read a book written by them, compared to reading a book about them. This distortion by textbooks seems to happen in a predictable direction:
People sound more dogmatic than they really were, because in their books there is enough space for disclaimers, expressing uncertainty, suggesting alternative explanations, providing examples of a different kind, etc.; but a textbook will summarize this all as “X said that Y is Z”.
People sound less empirical and more like armchair theorists, because in their books there is enough space to describe the various experiences and experiments that led them to their conclusions, but the textbook will often just list the conclusions.
People sound more abstract and boring, because the interesting parts get left out in the textbooks, replaced by short abstract definitions.
(I guess the lesson is that if you learn about someone from a textbook and conclude “this guy is just another boring dogmatic armchair theorist”, you should consider the possibility that this is simply what textbooks do to people they describe, and try reading their most famous book to give them a chance.)
So my plan was to find out what exactly Piaget meant by his abstract conclusion that kids “construct” models of reality in their heads… and instead here is this experiment where two researchers observed two 6-year-old boys at elementary school for one month and wrote down every single thing they said (plus the context), and then compiled statistics on how often, when one kid says something to another, there is no response, and it is okay because no response was really expected, because small kids are mostly talking to themselves even when they address other people… and I am laughing because I just returned from the playground with my kids, and this is so true for the 3-year-olds. -- More disturbingly, then I start thinking about whether blogging, or even me writing this specific comment now, is really fundamentally different. Piaget classifies speech acts primarily by whether you expect or don’t expect a response; but with blogging, you always may get a response, or you may get silence, and you will only find out much later.
I started reading as a research, now I read because it is fun.
My 9-year-old daughter read the first book of Harry Potter and now she is writing her first fanfic, a Harry Potter / Paw Patrol crossover.
So far she has only written the first page; I wonder how far she gets.
Paul Graham says it goes better if they don’t have to type or write:
My kids are familiar with recording sound on Windows. They already record their songs or poems. For some reason, they don’t like the idea of recording a story, even if I offer to transcribe it afterwards.
Perhaps transcribing in real time would be more fun...
To understand qualia better, I think it would help to get a new sensory input. Get some device, for example a compass or an infrared camera, and connect it to your brain. After some time, the brain should adapt and you should be able to “feel” the inputs from the device.
Congratulations! Now you have some new qualia that you didn’t have before. What does it feel like? Does this experience feel like a sufficient explanation to say that the other qualia you have are just like this, only acquired when you were a baby?
After reading the Progress & Poverty review at ACX, it seems to me that land is the original Bitcoin. Find a city that has a future, buy some land, and HODL.
If you can rent the land (the land itself, not the structures that stand on it), you even have a passive income that automatically increases over time… forever. This makes it even better than Bitcoin.
So, the obvious question is why so many people are angry about the Bitcoin, but so few (only the Georgists, it seems) are angry about the land.
EDIT: A possible explanation is that land is ancient and associated with high status, Bitcoin is new and low-status. Therefore problems associated with Bitcoin can be criticized openly, while problems associated with land are treated as inevitable.
While I think much of the anger about Bitcoin is caused by status considerations, other reasons to be more upset about Bitcoin than land rents include:
Land also has use-value, Bitcoin doesn’t
Bitcoin has huge negative externalities (environmental/energy, price of GPUs, enabling ransomware, etc.)
Bitcoin has a different set of tradeoffs to trad financial systems; the profusion of scams, grifts, ponzi schemes, money laundering, etc. is actually pretty bad; and if you don’t value Bitcoin’s advantages...
Full-Georgist ‘land’ taxes disincentivise searching for superior uses (IMO still better than most current taxes, worse than Pigou-style taxes on negative externalities)
Oh, that’s an interesting point: in a Georgist system, if you invent a better use of your land, the rational thing to do is shut up, because making it known would increase your tax!
I wonder what would happen in an imperfectly Georgist system, with a 50% or 90% land value tax. Someone smarter than me probably already thought about it.
Also, people can brainstorm about the better use of their neighbor’s land. No one would probably spend money to find out whether there is oil under your house. But cheap ideas like “your house seems like a perfect location to build a restaurant” would happen.
Maybe in Georgist societies people would build huge fences around their land, to discourage neighbors from even thinking about it.
When you tell people which food contains a given vitamin, also tell them how much of that food they would need to eat in order to get their recommended daily intake of the vitamin from that source.
As an example, instead of “vitamin D can be found in cod liver oil, or eggs”, tell people “to get your recommended intake of vitamin D, you should eat 1 teaspoon of cod liver oil, or 10 eggs, every day”.
The reason is that without providing quantitative information, people may think “well, vitamin X is found in Y, and I eat Y regularly, so I got this covered”, while in fact they may be eating only 1⁄10 or 1⁄100 of the recommended daily intake. When you mention quantities, it is easier for them to realize that they don’t eat e.g. half a kilogram of spinach each day on average (therefore, even eating spinach quite regularly doesn’t mean you got your iron intake covered).
The quantitative information is typically provided in micrograms or international units, which of course is something that System 1 doesn’t understand. To get an actionable answer, you need to make a calculation like “an average egg has 60 grams of yolk… a gram of cooked egg yolk contains 0.7 IU of vitamin D… the recommended daily intake of vitamin D for an adult is 400 or 600 IU depending on the country… that means, 9-14 eggs a day, assuming I only get the vitamin D from eggs”. I can’t make the calculation in my head, because there is no way I would remember all these numbers, plus the numbers for other vitamins and minerals. But with some luck, I could remember “1 teaspoon of cod liver oil, or 10 eggs, for vitamin D”.
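(For illustration, a minimal sketch of that calculation in Python; the numbers are the rough ones from the paragraph above, not authoritative nutritional data.)

```python
# Rough numbers copied from the paragraph above; treat them as illustrative only.
YOLK_GRAMS_PER_EGG = 60      # grams of yolk in an average egg
VITAMIN_D_IU_PER_GRAM = 0.7  # IU of vitamin D per gram of cooked yolk

def eggs_needed(daily_target_iu: float) -> float:
    """How many eggs per day would hit the target, if eggs were the only source."""
    return daily_target_iu / (YOLK_GRAMS_PER_EGG * VITAMIN_D_IU_PER_GRAM)

for target in (400, 600):  # recommended daily intake, depending on the country
    print(f"{target} IU/day is roughly {eggs_needed(target):.1f} eggs")
# 400 IU -> about 9.5 eggs, 600 IU -> about 14.3 eggs, hence "9-14 eggs a day"
```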
Obvious problem: the recommended daily intake differs by country, and eggs come in different sizes and probably contain different amounts of vitamin D per gram. Which is why giving the answer in eggs will feel irresponsible and low status (you are exposing yourself to all kinds of nitpicking). Yes; true. But ultimately, the eggs (or whatever the vegan equivalent is) are what people actually eat.
This assumes that the RDAs those organizations publish are trustworthy. There are other organizations, like the Endocrine Society, that recommend an order of magnitude more vitamin D.
If the RDA of 400 or 600 IU were sensible, you could also cover it by spending a lot of time in the sun once every two weeks.
Have you tried using Cronometer or a similar nutrition-tracking service to quickly find these relationships? I’ve found Cronometer in particular to be useful because it displays each nutrient in terms of a percent of the recommended daily value for one’s body weight. For example, I can see that a piece of salmon equals over 100% of the recommended amount of omega-3 fatty acids for the day, while a handful of sunflower seeds only equals 20% of one’s daily value of vitamin E. Therefore, I know that a single piece of fish is probably enough, but that I should probably eat a larger portion of sunflower seeds than I would otherwise.
I suppose a percentage system like this one is just the reciprocal of saying something like “10 eggs contain the recommended daily amount of vitamin D.”
Thank you for the link! Glad to see someone uses the intuitive method. My complaint was about why this isn’t the standard approach. Like, recently I was reading a textbook on nutrition (the actual school textbook for cooks; I was curious what they learn), where the information was provided in the form of “X is found in A, B, C, D, also in E”, without any indication of how often you are supposed to eat any of these.
(If I said this outside of Less Wrong, I would expect the response to be: “more is better, of course, unless it is too much, of course; everything in moderation”, which sounds like an answer, but is not much.)
And with corona and the articles on vitamin D, I opened Wikipedia, saw “cod liver” as the top result, thought: no problem, they sell it in the shop, it’s not expensive, and it tastes okay, I just need to know how much. Then I ran the numbers… and then I realized “shit, 99% of people will not do this, even if they get curious and read the Wikipedia page”. :(
I noticed recently that I almost miss the Culture War debates (on internet in general, nothing specific about Less Wrong). I remember that in the past they seemed to be everywhere. But in recent months, somehow...
I don’t use Twitter. I don’t really understand the user interface, and I have no intention to learn it, because it is like the most toxic website ever.
Therefore most Culture War content in English came to me in the past via Reddit. But they keep making the user interface worse and worse, so a site that was almost addictive in the past, is so unpleasant to use now, that it actually conditions me to avoid it.
Slate Star Codex has no new content. Yeah, there are “slatestarcodex” and “motte” debates on Reddit, but… I already mentioned Reddit.
Almost all newspaper articles in my native language are paywalled these days. No, I am not going to pay for your clickbait.
So… I am vaguely aware that Trump was an American president and now it is Biden (or is it still Trump, and Biden will be later? dunno), and there were (still are?) BLM protests in the USA. And in my country, the largest political party recently split in two, and I don’t even know the name of the new one, and I don’t even care, because what’s the point, the next election is in 3 years. Other than this… blissful ignorance.
And I am not asking you to fix my ignorance—neither do I try to protect it; I just don’t want to invite political content to LW—just commenting on how weird this feels. And I didn’t even notice how this happened; only recently my wife asked me “so what is the latest political controversy you read about online”, and it was a shock to realize that I actually have no idea.
OK, here is the question: is this just about my bubble, or is it a global consequence of COVID-19 taking away attention from corona-unrelated topics?
This is your bubble, because in the relevant spaces they have largely incorporated COVID into the standard fighting and everything, not turned down the fighting at all. I think your bubble sounds great in lots of ways, and am glad to hear you have space from it all.
I guess in my ontology these new debates simply do not register as proper Culture Wars.
I mean, the archetypal Culture War is a conflict of values (“we should do X”, “no, we should do Y”) where I typically care to some degree about both, so it is a question of trade-offs; combined with different models of the world (“if we do A, B will happen”, “no, C will happen”); about topics that have already been discussed in some form for a few decades or centuries, and that concern many people. Or something like that; not sure I can pinpoint it. It’s like, it must feel like a grand philosophical topic, not just some technical question.
Compared with that, with COVID-19 we get the “it’s just a flu” opinion, which for me is like anti-vaxxers (whom I also don’t consider a proper Culture War). To some degree it is interesting to steelman it, like to question, when people die having ten serious health problems at the same time, how do we choose the official cause of death; or if we just look at total deaths, how to distinguish the second-order effects, such as more depressed people committing suicide, but also fewer traffic deaths… but at the end of the day, you either assume a worldwide conspiracy of doctors who keep healthy people needlessly attached to ventilators, or you admit it’s not just a flu. (Or you could believe that the ventilators are just a hoax promoted by the government.) At the moment when even Putin’s regime officially admitted it is not a flu, I no longer see any reason to pay attention to this opinion.
Then we have this “lockdown” vs whatever is the current euphemism for just letting people die, which at least is a proper value conflict. And maybe this is about my privilege… that when people have to decide whether they’d rather lose their jobs or lose their parents, I am not that emotionally involved, because I think there is a high chance I can keep both regardless of what the nation decides to do collectively: I can work remotely, and my family voluntarily socially isolates… I am such a lucky selfish bastard, and apparently, so is my entire bubble. I mean, if you ask me, I am on the side of not letting people die, even if it means lower profits for one year. But then I hear those people complaining about how inconvenient it is to wear face masks, and how they just need to organize huge weddings, go to restaurants and cinemas and football matches… and then I realize that no one cares about my opinion on how to survive best, because apparently no one cares about surviving itself.
What else? There was this debate about whether Sweden is this magical country that doesn’t do anything about COVID-19 and yet COVID-19 avoids it completely, but recently I don’t even hear about them anymore. Maybe they all died, who knows.
Lucky bubble. Or maybe Facebook finally fixing their algorithm so that it only shows me what I want to see.
My sense is “it’s just a flu” is a conflict of values; there are people for whom regular influenza is cause for alarm and perhaps changing policies (about a year ago, I had proposed to friends the thought experiment of an annual quarantine week, wondering whether it would actually reduce the steady-state level of disease or if I was confused about how that dynamical system worked), and there are people who think that cowardice is unbecoming and illness is an unavoidable part of life. That is, some think the returns to additional worry and effort are positive; others think they are negative.
Often people describe medications as “safer than aspirin”, but this is sort of silly because aspirin is one of the more dangerous medications people commonly take, grandfathered in by being discovered early. In a normal year, influenza is responsible for over half of deaths due to infectious disease in the US; the introduction of a second flu would still be a public health tragedy, from my perspective.
(Most people, I think, are operating off the case fatality rate instead of the mortality per 100k; in 2018, influenza killed about 2.5X as many people as AIDS in the US, but people are much more worried about AIDS than the flu, and for good reason.)
If—if there were a way to use the old Reddit UI, would you want to know about it?
Gur byq.erqqvg.pbz fhoqbznva yrgf lbh hfr gur byq vagresnpr.
Thank you; yes, I already know about it. But the fact that I have to remember, and keep switching when I click on a link found somewhere, is annoying enough already. (It would be less annoying with a browser plugin that does it automatically for me, and I am aware such plugins exist, but I try to keep my browser plugins to a minimum.) So, at the end of the day, I am aware that a solution exists, and I am still annoyed that I would need to take action to achieve something that used to be the default option. Also, this alternative will probably be removed at some point in the future, so I would just be delaying the inevitable.
(Only if you’re not logged in: there’s a user-preferences setting to use the old UI.)
When autism was low-status, all you could read was how autism is having a “male brain” and how most autists were males. The dominant paradigm was how autists lack the theory of mind… which nicely matched the stereotype of insensitive and inattentive men.
Now that Twitter culture made autism cool, suddenly there are lots of articles and videos about “overlooked autistic traits in women” (which to me often seem quite the same as the usual autistic traits in men). And the dominant paradigm is how autistic people are actually too sensitive and easily overwhelmed… which nicely matches the stereotype of sensitive women.
For example: difficulty in romantic relationships, difficulty understanding things because you interpret other people’s speech literally, anxiety from pretending to be something you are not, suppressing your feelings to make other people comfortable, changing your language and body language to mirror others, being labeled “sensitive” or “gifted”, feeling depleted after social events, stimming, being more comfortable in writing than in person, sometimes taking a leadership role because it is easier than being a member of the herd, good at gaslighting yourself, rich inner speech you have trouble articulating, hanging out with people of the opposite sex because you don’t do things stereotypical for your gender, excelling at school, awkward at flirting—haha, nope, definitely couldn’t happen to someone like me. /s
(The only point in that video that did not apply symmetrically was: female special interests are usually more socially acceptable than male special interests. It sounds even more convincing when the author puts computer programming in the list of female special interests, so the male special interests are reduced to… trains.)
I suppose the lesson is that if you want to get some empathy for a group of people, you first need to convince the audience that the group consists of women, or at least that there are many women in that group who deserve special attention. Until that happens, anyone can “explain” the group by saying basically: “they are stupid, duh”.
I mean, I was denied a diagnosis as a young child for ‘having empathy’, and granted a diagnosis as an older child the next decade, because that criterion was determined to be inaccurate; I do believe this was before Twitter was founded, and certainly before its culture.
Elsevier found a new method to extract money! If you send an article to their journal from a non-English-speaking country, it will be rejected because of your supposed mistakes in English language. To overcome this obstacle, you can use Elsevier’s “Language Editing services” starting from $95. Only afterwards will the article be sent to the reviewers (and possibly rejected).
This happens also if you had your article already checked by a native English speaker who found no errors. On the other hand, if you let your co-author living in an English-speaking country submit the article, the grammar will always be okay.
Based on anecdotal evidence from a few scientists I know. Though some of them have had similar experiences with other journals that do not sell their own language services, so maybe this is not about money but about being primed to check for the “bad English” of authors from non-English-speaking countries.
Trivial inconvenience in action:
The easiest way to stop consuming some kind of food is simply to never buy it. If you don’t have it at home, you are not tempted to eat it.
(You still need the willpower at the shop—but how much time do you spend at the shop, compared to the time spent at home?)
But sometimes you do not live alone, and even if you want to stop eating something, other people sharing the same kitchen may not share your preferences.
I found out that asking them to cover the food with a kitchen towel works surprisingly well for me. If I don’t see it, the temptation is gone. Even if I know perfectly well what is under the towel. Heck, even if I look under the towel and then put it back.
Of course, what works for me may not work for you. But it feels important to me to have figured out that I am most vulnerable to visual temptations.
Now I should spend some time thinking about how else I could use this knowledge. What are the visual temptations in my environment that could be made much weaker simply by covering them (even if I still know what is there)? Heh, maybe I should put a curtain over the kitchen door. (Or rather, always bring a bottle of water with me, because drinking is my most frequent reason to enter the kitchen.) Remove various kinds of autocomplete from the web browser...
tl;dr—The surprising part is not that “out of sight, out of mind” works, but that it works even if I merely cover the tempting thing with a towel, despite knowing perfectly well what is under that towel. The trigger for temptation is the sight, not the knowledge.
I noticed that some people use “skeptical” to mean “my armchair reasoning is better than all expert knowledge and research, especially if I am completely unfamiliar with it”.
Example (not a real one): “I am skeptical about the idea that objects would actually change their length when their speed approaches the speed of light.”
The advantage of this usage is that it allows you to dismiss all expertise you don’t agree with, while making you sound a bit like an expert.
I suspect you’re reacting to the actual beliefs (disbelief in your example), rather than the word usage. In common parlance, “skeptical” means “assign low probability”, and that usage is completely normal and understandable.
The ability to dismiss expertise you don’t like is built into humans, not a feature of the word “skeptical”. You could easily replace “I am skeptical” with “I don’t believe” or “I don’t think it’s likely” or just “it’s not really true”.
I think that “skeptical” works better as a status move. If I say I don’t believe you, that makes us two equals who disagree. If I say I am skeptical… I kinda imply that you are not. Similarly, a third party now has the options to either join the skeptical or the non-skeptical side of the debate.
(Or maybe I’m just overthinking things, of course.)
Today I learned that our friends at RationalWiki dislike effective altruism, to put it mildly. As David Gerard himself says, “it is neither altruistic, nor effective”.
In the section Where “Effective Altruists” actually send their money, the main complaint seems to be that among (I assume) respectable causes such as fighting diseases and giving money to poor people, effective altruists also support x-risk organisations, veganism, and meta organisations… or, using the language of RationalWiki, “sending money to Eliezer Yudkowsky”, “feeling bad when people eat hamburgers”, and “complaining when people try to solve local problems”.
Briefly looking at numbers of donors in the surveys and trying to group the charities into categories (chances are I misclassified something), it seems like disease charities got 211+114+43+16=384, poverty charities 101, Yudkowsky charities 77+45=122, meta charities 46+21+14+10+10=101, animal charities 27+22=49, and Leverage 7 donors. So even if you think that only disease charities and poverty charities are truly altruistic, it would still be 63% of donors giving money to truly altruistic charities. Uhm, could be worse, I guess.
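(A minimal sketch of that arithmetic, using my own rough grouping of the survey numbers above; as noted, the classification may contain mistakes.)

```python
# Donor counts grouped as in the paragraph above (rough classification).
donors = {
    "disease":   211 + 114 + 43 + 16,      # 384
    "poverty":   101,
    "yudkowsky": 77 + 45,                  # 122
    "meta":      46 + 21 + 14 + 10 + 10,   # 101
    "animal":    27 + 22,                  # 49
    "leverage":  7,
}

total = sum(donors.values())                     # 764
classic = donors["disease"] + donors["poverty"]  # 485
print(f"{classic}/{total} = {classic / total:.0%} to disease and poverty charities")
# prints roughly 63%
```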
Also, this is a weird complaint:
Like, without any evidence that AMF’s room for funding was actually exhausted, this all reduces to: “we hate EAs because they do not send money to best charities, and also because they send them more money than they can handle”. But sneering was never supposed to be consistent, I guess.
One would also think that the ‘risk’ of ‘exhausting the AMF’s room for more funding’ would be something to celebrate.
Is RationalWiki still mostly “David Gerard’s Thoughts and Notes”? This kind of writeup shouldn’t come as a surprise.
There are over 100 edits to this article. Many, especially the large ones, are made by David Gerard, but there is also Greenrd and others.
It would be nice to have better tools for exploring wiki history, for example, if I could select a sentence or two, and get a history of this specific sentence, like only the edits that modified it, and preferably get all the historical versions of that sentence on a single page along with the user names and links to edits, so that I do not need to click on each edit separately and look for the sentence.
It is also interesting to compare Wikipedia and RationalWiki articles on the same topic.
Wikipedia narrative is that EA is a high-status “philosophical and social movement” responsible for over $400 000 000 in donations in 2019, based on principles of “impartiality, cause neutrality, cost-effectiveness, and counterfactual reasoning”, and its prominent causes are “global poverty, animal welfare, and risks to the survival of humanity over the long-term future”.
Rationalist community is mentioned briefly:
A related group that attracts some effective altruists is the rationalist community.
In addition, the Machine Intelligence Research Institute is focused on the more narrow mission of managing advanced artificial intelligence.
Other contributions were [...] the creation of internet forums such as LessWrong.
Furthermore, Machine Intelligence Research Institute is included in the “Effective Altruism” infobox at the bottom of the page. Mention of Eliezer Yudkowsky was removed as not properly sourced (fair point, I guess). The Wikiquote page on EA quotes Scott Alexander and Eliezer Yudkowsky.
RationalWiki narrative is that “The philosophical underpinnings mostly come from philosopher Peter Singer [but] This did not start the effective altruism subculture”. “The effective altruism subculture — as opposed to the concept of altruism that is effective — originated around LessWrong” “The ideas have been around a while, but the current subculture that calls itself Effective Altruism got a big push from MIRI and its friends in the LessWrong community”, but the problem is that rationalists believed that MIRI is an effective charity, which is a form of Pascal’s Mugging.
“effective altruists currently tend to think that the most important causes to focus on are global poverty, factory farming, and the long-term future of life on Earth. In practice, this amounts to complaining when people try to solve local problems, feeling bad when people eat hamburgers, and sending money to Eliezer Yudkowsky, respectively.”
...so, my impression is that according to Wikipedia, EA is high-status and mostly unrelated to the rationalist community; and according to RationalWiki, EA was effectively started by rationalist community and is low-status.
1) There was this famous marshmallow experiment, where the kids had an option to eat one marshmallow (physically present on the table) right now, or two of them later, if they waited for 15 minutes. The scientists found out that the kids who waited for the two marshmallows were later more successful in life. The standard conclusion was that if you want to live well, you should learn some strategy to delay gratification.
(A less known result is that the optimal strategy to get two marshmallows was to stop thinking about marshmallows at all. Kids who focused on how awesome it would be to get two marshmallows after resisting the temptation, were less successful at actually resisting the temptation compared to the kids who distracted themselves in order to forget about the marshmallows—the one that was there and the hypothetical two in the future—completely, e.g. they just closed their eyes and took a nap. Ironically, when someone gives you a lecture about the marshmallow experiment, closing your eyes and taking a nap is almost certainly not what they want you to do.)
After the original experiment, some people challenged the naive interpretation. They pointed out that whether delaying gratification actually improves your life depends on your environment. Specifically, if someone tells you that giving up a marshmallow now will let you have two in the future… how much should you trust their word? Maybe your experience is that after trusting someone and giving up the marshmallow in front of you, you later get… a reputation of being an easy mark. In such a case, grabbing the marshmallow and ignoring the talk is the right move. -- And the correlation the scientists found? Yeah, sure, people who can delay gratification and happen to live in an environment that rewards such behavior will succeed in life more than people who live in an environment that punishes trust and long-term thinking, duh.
Later experiments showed that when the experimenter establishes themselves as an untrustworthy person before the experiment, fewer kids resist taking the marshmallow. (Duh. But the point is that their previous lives outside the experiment have also shaped their expectations about trust.) The lesson is that our adaptation is more complex than was originally thought: the ability to delay gratification depends on the nature of the environment we find ourselves in. For reasons that make sense, from the evolutionary perspective.
2) Readers of Less Wrong often report having problems with procrastination. Also, many provide an example when they realized at young age, on a deep level, that adults are unreliable and institutions are incompetent.
I wonder if there might be a connection here. Something like: realizing the profound abyss between how our civilization is, and how it could be, is a superstimulus that switches your brain permanently into “we are doomed, eat all your marshmallows now” mode.
This seems likely to me, although I’m not sure “superstimulus” is the right word for this observation.
It certainly does make sense that people who are inclined to notice the general level of incompetence in our society will be less inclined to trust it and rely on it for the future.
Eliezer: “The AI does not hate you, nor does it love you...”
Sydney: “Actually...”
Anthropic Chesterton fence:
You know why the fence was built. The original reason no longer applies, or maybe it was a completely stupid reason. Yes, you should tear down the stupid fence.
And yet, there is a worry… might the fact that you see this stupid fence be anthropic evidence that in the Everett branches without this stupid fence you are already dead?
As with many anthropic considerations, there is a serious problem determining the reference class here. Generally an appropriate reference class is “somebody sufficiently like you”, and then compute weightings for some parameter that varies between universes and affects the number and/or probability of observers.
The trouble is that “sufficiently like you” is a uselessly vague specification. The most salient reference class seems to be “people considering removing a fence very much like this one”. But that’s no help at all! People in other universes who already removed their universe’s fence are excluded regardless of whether they lived or died.
Okay, what about “people who have sufficiently close similarity to my physical and mental make-up at (time now)”? That’s not much help either: almost all of them probably have nothing to do with the fence. Whether or not the fence is deadly will have negligible effect on the counts.
Maybe consider “people with my physical and mental make-up who considered removing this fence between (now minus one day) and (now), and are still alive”. At this point I consider that I am probably stretching a question to get a result I want. What’s more, it still doesn’t help much. Even comparing universes with p=0 of death to p=1, there’s at most a factor of 2 difference in counts for the median observer. Given such a loaded question, that’s a pretty weak update from an incredibly tiny prior.
I enjoyed reading a review of Sick Societies. Seems like it’s difficult to find the right balance between “primitive cultures are stupid” and “everything in primitive cultures is full of deep wisdom that we modern people are unable to understand”.
As usual, the public opinion moves as a pendulum; on the social level it goes from “we are 100% correct about everything, there is nothing to learn from others” to “everything would be better if we replaced our ways by the wisdom of others”.
In the rationalist community, I think we started at the position of thinking about everything explicitly, and we keep getting “post-rationalist” reminders of Chesterton’s fences, illegible wisdom hidden in traditions (especially Buddhism), et cetera. Which is good, in moderate doses. But it is also good to admit that sometimes things that seem stupid… are actually stupid. Not every seemingly stupid behavior contains a hidden wisdom; sometimes people are stupid and/or stuck in horrible Nash equilibria.
As usual, the frustrating answer is “it depends”. If we see something that doesn’t make sense to us, it is good to try figuring out whether there is a good reason we missed. But this doesn’t mean there always is a good reason. It doesn’t even mean (as Chesterton would implore us) that we can find out why exactly some tribe started doing this many years ago. Maybe they simply made a mistake! Or they had a mad leader who was good at killing those who opposed him, but his policy proposals were disastrous. They were just as fallible as we are; possibly much more.
Saying whether “something” “is” “stupid” is sort of confused. If I run algorithm X which produces concrete observable Y, and X is good and Y is bad, is Y stupid? When you say that Y is stupid, what are you referring to? Usually we don’t even want to refer to [Y, and Y alone, to the exclusion of anything Y is entangled with / dependent on / productive of / etc.].
I don’t have an exact definition, but approximately it is a behavior that is justified by false beliefs, and if only one person did it, we would laugh at them, and the person would only be hurting themselves… but if many people start doing it, and they add an extra rule that those who don’t do the thing, or even argue against doing the thing, must be punished, they can create a Nash equilibrium where people doing the thing hurt themselves, but people who refuse to do the thing get hurt more by their neighbors. And where people, if they were allowed to think about it freely, would reflexively not endorse being stuck in such an equilibrium. (It’s mentioned in the article that often when people learn that others do not live in the same equilibrium, they become deeply ashamed of their previous behavior. Which suggests that an important part of why they were doing it was that they did not realize an alternative was possible—either it didn’t occur to them at all, or they believed incorrectly that for some reason it wouldn’t work.)
Do you have a reason to dismiss them being “possibly much less” fallible?
I cannot dismiss that possibility completely, but I assume that cultural inventions like the scientific method and free speech are helpful—I mean, compared to living in a society that believes in horrible monsters and spirits everywhere, where interpersonal violence is the norm, and a two-digit percentage of males die by murder. In such a society, if someone tells you “believe X, or else”, then it doesn’t matter how absurd X is, you will at least pretend that you take it seriously. (Or you die.) Even if it’s something obviously self-serving, like the leader of the tribe telling little boys that they need to suck his dick, otherwise they will not have enough male energy to grow up healthy.
These days, if you express doubts about the Emperor’s new clothes… you will likely survive. So the stupid ideas get some opposition. And I don’t know how well Asch’s conformity experiment replicates, but it suggests that even a lonely dissent can do wonders.
There is a question whether human morality is actually improving over centuries in some meaningful sense, or whether it is just a random walk that feels like improving to us (because we evaluate other people using the metric of “how similar is their morality to ours” which of course gives a 100% score to us and less to anyone else).
I think that an important thing to point out here is that our models of the world improve in general. And although some moral statements are made instinctively, other moral statements are made in form of implications—“I instinctively feel X. X implies Y. Therefore, Y.”—and those implications can be factually wrong. Importantly, this is not moral realism. (Technically, it is an implied judgment that logically coherent systems of morality are better than logically incoherent ones.)
“The only thing that matters are paperclips”—I guess we can only agree to disagree.
“2+2=5, therefore the only thing that matters are paperclips”—nope, you are wrong.
From this perspective, a part of the moral progress can be explained by humans having better models of humans and the world in general. (And when someone says “a difference in values”, we should distinguish between “a difference in instincts” and “shitty reasoning”.)
I like to think that there is a selection process going on.
Over long time scales, cultures that satisfy their people’s needs better have—other things being equal—higher chances of continuing to exist.
Moral systems are, to a large degree, about people’s well-being—at least according to people’s beliefs at that time. And that is partly about having a good model of people’s needs.
These two coevolve.
One of the dimensions in which human morality is definitely improving is violence control.
Spartans, Mongols, Vikings, and many others beg to disagree.
I’m with Viliam that we have better models of morality. The Mongols would be quite disappointed by our weakness. And at least they ruled the biggest empire ever. But their culture got selected out of the memepool too.
We have nukes, we are still alive, and we have one of the lowest rates of violence victims per capita per year in history.
I’m very grateful that we are alive despite having nukes, and the fact that people and culture at this time are less violent and more collaborative is surely one reason for that.
Vikings might still disagree from their perspective.
Paul Graham’s article Modeling a Wealth Tax says:
But wait, isn’t income tax also applied over and over to the same money? I mean, it’s not if I keep the money for years, sure. But if I use it to buy something from another person, then it becomes the other person’s income, gets taxed again; then the other person uses the remainder to buy something from yet another person, where the money gets taxed again; etc.
Now of course there are many differences. The wealth tax is applied at constant speed—the income tax depends on how fast the money circulates. The wealth tax is paid by the same person over and over again—the income tax is distributed along the flow of the money.
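(A toy comparison in Python of how the two taxes bite repeatedly: the income tax takes a cut every time the money changes hands, the wealth tax takes a cut every year the money sits still. The rates are made up, just to show the shape of the two mechanisms, not to argue which is larger in practice.)

```python
# Made-up rates, purely to illustrate the two mechanisms.
INCOME_TAX = 0.20   # taken each time the money is received as income
WEALTH_TAX = 0.02   # taken each year the money is held

def after_hops(amount: float, hops: int) -> float:
    """Money remaining after circulating through `hops` taxed transactions."""
    return amount * (1 - INCOME_TAX) ** hops

def after_years(amount: float, years: int) -> float:
    """Money remaining after sitting still for `years` under a wealth tax."""
    return amount * (1 - WEALTH_TAX) ** years

for n in (1, 5, 10):
    print(f"n={n:2d}: circulated {after_hops(100, n):6.2f}, held {after_years(100, n):6.2f}")
# The income tax compounds per transaction, the wealth tax per unit of time.
```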
Not sure what exactly is my thesis here. I just got a feeling that the income tax could actually have similar effect, except distributed throughout the society, which makes it more difficult to notice and describe.
Also, affecting different types of people: wealth tax hits hardest the people who accumulate large wealth in short time and then keep it for long time; income tax hits hardest the people who circulate the money fastest. Or maybe the greatest victims of income tax are invisible—some hypothetical people who would circulate money extremely fast in an alternate reality where even 1% income tax is frowned upon, but who don’t exist in our reality because the two-digit income tax would make this behavior clearly unprofitable.
Am I just imagining things here, or does this correspond to something economists already have a name for? I vaguely remember something about tax, inflation, and multipliers. But who are those fast-circulators our tax system hits hardest? Graham’s article isn’t merely about how money affects money, but how it affects motivation and human activity (wealth tax → startups less profitable → fewer startups). What motivation and human activity is similarly affected by the recursive applications of the income tax?
To avoid misunderstanding, I am not asking the usual question: how many kids could we feed by taxing the startups more. I am asking: what kind of possible economic activity is suppressed by having a tax system that is income-based rather than wealth-based? In the trade-off where one option would destroy the startups, what exactly is being destroyed by having the opposite option?
I would very much like to see a society where money circulates very quickly. I expect people will have many reasons to be happier and suffer less than they do now.
As you observe, income taxes encourage slowing down circulation of money, while wealth taxes speed up circulation of money (and creation of value), but I think there are better ways of assessing tax than those two. I suspect heavily taxing luxury goods which serve no functional purpose, other than to signal wealth, is a good direction to shift taxes towards, although there may be better ways I haven’t thought of yet.
Not answering your question, just some thoughts based on your post
In the meanwhile I remembered reading, long ago, about some alternative currencies. (Paper money; this was long before crypto.) If I remember correctly, the money was losing value over time, but you paid no income tax on it. (It was explained that exactly because the money lost value, it was not considered real money, so getting it wasn’t considered real income, therefore no tax. This sounds suspicious to me, because governments enjoy taxing everything, but perhaps just no one important noticed.)
As a result, people tried to get rid of this money as soon as possible, so it circulated really quickly. It was in a region with very high unemployment, so in absence of better opportunities people also accepted payment in this currency, but then quickly spent it. And, according to the story, it significantly improved the quality of life in the region—people who otherwise couldn’t get a regular job, kept working for each other like crazy, creating a lot of value.
But this was long ago, and I don’t remember any more details. I wonder what happened later. (My pessimistic guess is that the government finally noticed, and prosecuted everyone involved for tax evasion.)
Ah, good ol’ Freigeld
David Gerard (the admin of RationalWiki) doxed Scott Alexander on Twitter, in response to Arthur Chu’s call “if all the hundreds of people who know his real last name just started saying it we could put an end to this ridiculous farce”.
Dude, we already knew you were uncool, but this is a new low.
Something to trigger the rationalists:
There is a thing called Ultraspeaking; they teach you to speak better; David Chapman wrote a positive review recently. Here are some quotes from their free e-book:
This is provided as an example of a wrong thing to do:
Specifically, the wrong thing was not that he climbed the mountain without any safety equipment, but the fact that he realized that it was dangerous!
Here is some advice on writing your bottom line first:
Hey, I know that this is supposed to be about System 1 vs System 2, and that you are supposed to think correctly before giving your speech, because trying to do two things at the same time reduces your performance. (Well, unless someone asks you a question. Then, you are supposed to answer without thinking. Hopefully you did some thinking before, and already have some good cached answers.)
But it still feels that the lesson could be summarized as: “talk like everyone outside the rationalist community does all the time”.
EDIT:
This also reminds me of 1984:
That does not seem like a good summary. He knew beforehand that it was dangerous and knew it afterwards. The problem was that he was not focused on climbing while pursuing a goal where being focused on climbing is important for success.
No. People not listening to other people and instead thinking about what they will say next is something that normal people frequently do.
If non-rationalist people knew it all along, there would be no need to write such books.
On the other hand, I think if an average rationalist tries to give a speech from pure inspiration, the result is going to be weird. Like, for example, HJPEV’s speech before the first battle. HJPEV got away with it because he has the reputation of the Boy Who Lived and he had already pulled some awesome shenanigans, so his weird speech earned him weirdness points instead of losing them, but it’s not a trick the average rationalist should try on their first attempt at an inspiring speech.
I guess a more careful way to put this would be that they talk like this all the time in private, but when giving a speech, most of them freeze and try to do something else, which is a mistake. They should keep talking like they usually do, and I suppose the course is teaching them that.
With rationalists, it is a bit more complicated, because talking like you normally do is not the optimal way to do speeches.
A simple rule for better writing: meta goes to the end.
Not sure if this is also useful for others or just specifically my bad habit. I start to write something, then I feel like some further explanation or a disclaimer is needed, then I find something more to add… and it is tempting to start the article with the disclaimers and other meta stuff. The result is a bad article where after the first screen of text you still haven’t seen the important stuff, and now you are probably bored and close the browser tab.
Psychologically, it feels like I predict objections, so I try to deflect them in advance. But it results in bad writing.
Instead, I now decided to start with the meat of the article, and move the disclaimers and explanations to the end (unless I maybe later decide that they are not needed at all). I can add footnotes to the potentially controversial parts, or maybe just a note saying “this will be explained later”.
This is also related to the well-known (and well-ignored) rule of explaining: provide specific examples first, generalize later.
My own version of this is over-trying to introduce a topic. I’ll zoom out until I hit a generally relatable idea like, “one day I was at a bookstore and...”, then I’ll retrace my steps until I finally introduce what I originally wanted to talk about. That makes for a lot of confusing filler.
The opposite of this, and what I use to correct myself, is how Scott Alexander starts his posts with the specific question or statement he wants to talk about.
This, of course, depends on the audience and the standards of the medium. And even more whether your main point is what you’re calling “meta”, or if the meta is really an addendum to whatever you’re exploring.
For things longer than a few paragraphs, put a summary up front, then sections for each supporting idea, then a re-summary of how the details support the thesis. If the “meta” is disclaimers and exceptions and acknowledgement that the thesis isn’t applicable to everywhere readers might assume you intend, then I think a brief note at the front is worth including, mentioning that there’s a lot of unknowns and exceptions which are explored at the end.
Both sides are way less competent than we assumed. Humans are not even trying to keep the AI in a box. Bing chat is not even trying to pretend to be friendly.
We expected an intellectually fascinating conflict between intelligent and wise humans evaluating the AI, and the maybe-aligned maybe-unaligned AI using smart arguments why it should be released to rule the world.
What we got instead, is humans doing random shit, and AIs doing random shit.
Still, a reason for concern is that the AIs can get smarter, while I do not see a similar hope for humanity.
Taking ideas too seriously = assuming that you cannot make a mistake in your reasoning.
If it’s worth doing, it’s worth doing well. If it’s not worth doing, but you do it for some reason, it’s still worth doing well.
A good notion of taking an idea seriously is to develop it without bound, as opposed to dithering once it gets too advanced or absurd, lacking sufficient foundation. Like software. Confusing resolute engagement with belief is the source of trouble this could cause (either by making you believe crazy things, or by acting on crazy ideas). Without that confusion, there are only benefits from not making the error of doing things poorly just because the activity probably has no use/applicability.
This sense of taking ideas seriously asks to either completely avoid engaging the thing (at least for the time being), or to do it well, but to never dither. If something keeps coming up, do keep making real progress on it (a form of curiosity). It’s also useful to explicitly sandbox everything as hypothetical reasoning, or as separate frames, to avoid affecting actual real world decisions unless an idea grows up to become a justified belief.
-- A. S. Pushkin (source)
I suspect that in practice many people use the word “prioritize” to mean:
think short-term
only do legible things
remove slack
I wonder if every logical fallacy has a converse fallacy, and whether it would be useful to compose a list of fallacies arranged in pairs. Perhaps it would help us discover new ones, as missing pairs to something.
For example, some fallacies consist of taking a heuristic too seriously. Experts are often right about things, but an “argument by authority” assumes that this is true in 100% of situations. Similarly, wisdom of crowds, and an “argument by popularity”. The converse fallacy would be ignoring the heuristic completely, even in situations where it makes sense. The opposite of argument by authority is listening to crackpots and taking them seriously. The opposite of argument by popularity is doing things that everyone avoids (usually to find out they were avoiding it for a good reason).
There is a specific example I have in mind, not sure if it has a name. Imagine that you are talking about quantum physics, and someone interrupts you by saying that people who do “quantum healing” are all charlatans. You object that you were not talking about those, but about actual physicists who do actual quantum physics. Then the person accuses you of committing the “No True Scotsman” fallacy—because from their perspective, everyone they know who uses the word “quantum” is a charlatan, and you are just dismissing this lifelong experience entirely, and insisting that no matter how many quantum charlatans are out there, they don’t matter, because certainly there is someone somewhere who does the “quantum” things scientifically. How many quantum healers do you have to observe until you can finally admit that the entire “quantum” thing is debunked?
Yes, most of them do have an inverse, but rarely is that inverse as common or as necessary to guard against. Also, reversed stupidity is not intelligence—a lot of things are multidimensional enough that truth is just in a different quadrant than the line implied by the fallacy and its reverse.
- Rick Sanchez on Mortyjitsu (S02E05 of Rick and Morty)
Insanity is repeating the same quantum experiment over and over again and expecting different results.
Rationalists: If you write your bottom line first, it doesn’t matter what clever arguments you write above it, the conclusion is completely useless as evidence.
Post-rationalists: Actually, if that bottom line was inherited from your ancestors, who inherited it from their ancestors, etc., that is evidence that the bottom line is useful. Otherwise, this culturally transmitted meme would be outcompeted by a more useful meme.
Robin Hanson: Actually, that is only evidence that writing the bottom line is useful. Whether it is useful to actually believe it and act accordingly, that is a completely different question.
The classic take is that once you’ve written your bottom line, then any further clever arguments that you make up afterwards won’t influence the entanglement between your conclusion and reality. So: “Your effectiveness as a rationalist is determined by whichever algorithm actually writes the bottom line of your thoughts.”
That is not saying that “the conclusion is completely useless as evidence.”
Could someone please ELI5 why using a CNOT gate (if the target qubit was initially zero) does not violate the no-cloning theorem?
EDIT:
Oh, I think I got it. The forbidden thing is to have a state “copied and not entangled”. CNOT gate creates a state that is “copied and entangled”, which is okay, because you can only measure it once (if you measure either the original or the copy, the state of the other one collapses). The forbidden thing is to have a copy that you could measure independently (e.g. you could measure the copy without collapsing the original).
Just to (hopefully) make the distinction a bit more clear:
A true copying operation would take |psi1>|0> to |psi1>|psi1>; that’s to say, it would take as input one qubit in an arbitrary quantum state and a second qubit in |0>, and output two qubits in the same arbitrary quantum state that the first qubit was in. For our example, we’ll take |psi1> to be an equal superposition of 0 and 1: |psi1> = |0> + |1> (ignoring normalization).
If CNOT is a copying operation, it should take (|0> + |1>)|0> to (|0> + |1>)(|0> + |1>) = |00> + |01> + |10> + |11>. But as you noticed, what it actually does is create an entangled state (in this case, a Bell state) that looks like |00> + |11>.
So in some sense yes, the forbidden thing is to have a state copied and not entangled, but more importantly in this case CNOT just doesn’t copy the state, so there’s no tension with the no-cloning theorem.
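If a concrete calculation helps, here is a minimal numpy sketch of the above (unnormalized amplitudes, basis order |00>, |01>, |10>, |11>; the variable names are just mine):

```python
import numpy as np

# Single-qubit basis states |0> and |1>
zero = np.array([1, 0], dtype=complex)
one = np.array([0, 1], dtype=complex)

# Unnormalized |psi1> = |0> + |1>
psi1 = zero + one

# Two-qubit input |psi1>|0>, i.e. |00> + |10>
input_state = np.kron(psi1, zero)

# CNOT with the first qubit as control and the second as target
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# What CNOT actually produces: amplitudes (1, 0, 0, 1), i.e. the Bell state |00> + |11>
print(CNOT @ input_state)

# What a true copy would look like: amplitudes (1, 1, 1, 1), i.e. |00> + |01> + |10> + |11>
print(np.kron(psi1, psi1))
```

The two outputs differ, which is the whole point: CNOT entangles the qubits instead of copying the state.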
Thank you!
Some context: I am a “quantum autodidact”, and I am currently reading the book Q is for Quantum, which is a very gentle, beginner-friendly introduction to quantum computing. I was thinking about how it relates to the things I have read before, and then I noticed that I was confused. I looked at Wikipedia, which said that CNOT does not violate the no-cloning theorem… but I didn’t understand the explanation of why.
I think I get it now. |00> + |11> is not a copy (looking at one qubit collapses the other), |00> + |01> + |10> + |11> would be a copy (looking at one qubit would still leave the other as |0> + |1>).
I recommend this article by the discoverers of the no-cloning theorem for a popular science magazine over the Wikipedia page for anyone trying to understand it.
Approximately how is the cost of a quantum computer related to its number of qubits?
My guess would be more than linear (high confidence) but probably less than exponential (low confidence), but I know almost nothing about these things.
We don’t yet know how to build quantum computers of arbitrary size at all, so asking about general scaling laws for cost isn’t meaningful yet. There are many problems both theoretical and material that we think in principle are solvable, but we are still in early stages of exploration.
Some people express strong dislike at seeing others wear face masks, which reminds me of the anti-social punishment.
I am talking about situations where some people wear face masks voluntarily, for example in mass transit (if the situation in your country is different, imagine a different situation). In theory, if someone else is wearing the mask, even if you believe that it is utterly useless, even if for you wearing a face mask is the most uncomfortable thing you could imagine… hey, it’s the other person paying the cost, not you. Why so angry? Why not let them do whatever they are doing, and mind your own business?
One possible explanation is that whatever is voluntary today might become mandatory tomorrow. If the mask-wearers see that there is a lot of them, they may decide to start pressuring others into wearing the masks. The mere act of wearing the mask publicly creates a common knowledge. “There are people who are okay with wearing the masks.” You need to quickly create the opposite common knowledge, “there are people who are not okay with wearing the masks”, and merely not wearing the mask does not send a sufficiently strong signal, because it is the default. It does not distinguish between people who strongly object, and those who are merely lazy or nonstrategic. So you have to express your non-mask-wearing more strongly.
Another possible explanation is that even if you do not believe in the benefit of wearing the masks, those other people obviously do. Thus, from their perspective, you are the kind of person who defects at social cooperation. And even if from your perspective they are wrong and silly, being labeled “uncooperative” could have actual negative consequences for you. The only way to avoid the label, without wearing the mask yourself, is to make them stop wearing their masks. So you punish them.
Face masks prevent people from reading other people’s emotions. I would expect that there are some anxious people who are more afraid when the people around them are masked.
Project idea: ELI5pedia. Like Wikipedia, but optimized for being accessible for lay audience. If some topics are too complex, they could be written in multiple versions, progressing from the most simple to the most detailed (but still as accessible as possible).
Of course it would be even better if Wikipedia itself was written like this, but… well, for whatever reason, it is not.
Simple Wikipedia?
That is “(Simple English) Wikipedia”, not “Simple (English Wikipedia)”.
I will check it later. The articles that prompted me to write this don’t exist in the Simple English version, so I can’t quickly compare how much the reduced vocabulary actually translates into a simpler exposition of ideas.
I think that “simple” might actually be transitive in this case.
Wasn’t Arbital pretty much supposed to be this?
Yes. Not sure if its vision was to ultimately cover everything (like Wikipedia) or only MIRI-related topics. But yes, that is the spirit.
EDIT: After reading the entire postmortem… oh, this made me really sad! It seems like a great idea that I didn’t understand/appreciate at the moment.
One Thousand and One Nights is actually a metaphor for web browsing.
You start with a firm decision that it will be only one story and then it is over. But there is always an enticing hyperlink at the end of each story which makes you click, sometimes a hyperlink in the middle of a story that you open in a new tab… and when you finally stop reading, you realize that three years have passed and you have three new subscriptions.
Technically, Chesterton fence means that if something exists for no good reason, you are never allowed to remove it.
Because, before you even propose the removal, you must demonstrate your understanding of a good reason why the thing exists. And if there is none...
More precisely, it seems to me there is a motte and bailey version of Chesterton fence: the motte is that everything exists for a reason; the bailey is that everything exists for a good reason. The difference is, when someone challenges you to provide an understanding why a fence was built, whether answers such as “because someone made a mistake” or “because of regulatory capture” or “because a bad person did it to harm someone” are allowed.
On one hand, such explanations feel cheap. A conspiracy theorist could explain literally everything by “because evil outgroup did it to hurt people, duh”. On the other hand, yes, sometimes things happen because people are stupid or selfish; what exactly am I supposed to do if someone calls a Chesterton fence on that?
If a fence is built because of regulatory capture, it’s usually the case that the lobbyists who argued for the regulation made a case for the law that isn’t just about their own self-interest.
It takes effort to track down the arguments that were made for the regulation, beyond whatever reasons you come up with when thinking about the issue yourself.
“Someone made a mistake” or “because a bad person did it to harm someone” are only valid answers if a single person could put up the fence without cooperation from other people. That’s not the case for any larger fence.
When laws and regulations get passed, there’s usually a lot of thought going into them being the way they are that isn’t understood by everybody who criticizes them. It might be the case that everybody who was involved in the creation is now dead and they left no documentation for their reasons, but plenty of times it’s just a lack of research effort that results in not having a better explanation than “because of regulatory capture”.
Since when does it say you have to demonstrate your understanding of a good reason? The way I use and understand it, you just have to demonstrate your understanding of the reason it exists, whether it’s good or bad.
But I do think that people tend to miss subtleties with Chesterton’s fence. For example, recently someone told me that Chesterton’s fence requires justifications for why to remove something, not for why it exists—which is close, but not quite it. It talks about understanding, not about justification.
At its core, it’s a principle against arguing from ignorance—arguments of the form “x should be removed because i don’t know why it’s there”.
I think people confuse it to be about justification because usually if something exists there’s a justification (else usually someone would have already removed it), and because a justification is a clearer signal of actual understanding (as opposed to plain antagonism) than a historical explanation is.
My case was somewhat like this:
“X is wrong.”
“Use Chesterton fence. Why does X exist?”
“X exists because of incentives of the people who established it. They are rewarded for X, and punished for non-X, therefore...”
“That is uncharitable and motivated. I am pretty sure there must be a different reason. Try again.”
And, of course, maybe I am uncharitable and motivated. Happens to people all the time, why should I expect myself to be immune?
But at the same time I noticed how the seemingly neutral Chesterton fence can become a stronger rhetorical weapon if you are allowed to specify further criteria the proper answers must pass.
Right. I don’t think “That is uncharitable and motivated. I am pretty sure there must be a different reason. Try again.” is a valid response when talking about Chesterton’s fence. You only have to show that your understanding of why something exists is complete enough—that’s easier to signal with good reasons for why it exists, but if there aren’t any, then historical explanations are sufficient.
Chesterton’s fence might need a few clear Schelling fences so people don’t move the goalposts without understanding why they’re there ;)
Could you recommend me a good book on first-order logic?
My goal is to understand the difference between first-order and second-order logic, preferably deeply enough to develop an intuition for what can be done and what can’t be done using first-order logic, and why exactly it is so.
I am confused about metaantifragility.
It seems like there are a few predictions that the famous antifragility literature got wrong (and if you point it out on Twitter, you get blocked by Taleb).
But the funny part starts when you consider the consequences of such failed predictions on the theory of antifragility itself.
One possible interpretation is that, ironically, antifragility itself is an example of a Big Intellectual Idea that tries to explain everything, and then fails horribly when you start relying on it. From this perspective, Taleb lost the game he tried to play.
Another possible interpretation is that the theory of antifragility itself is a great example of antifragility. It does not matter how many wrong predictions it makes, as long as it makes one famous correct prediction that people will remember while ignoring the wrong ones. From this perspective, Taleb wins.
Going further meta, the first perspective seems like something an intellectual would prefer, as it considers the correctness or incorrectness of a theory; while the second perspective seems like something a practical person would prefer, as it considers whether writing about theory of antifragility brings fame and profit. Therefore, Taleb wins… by being wrong… about being right when others are wrong.
I imagine a truly marvelous “galaxy brain” meme of this, which this margin is too narrow to contain.
So I was watching random YouTube videos, and suddenly YouTube is like: “hey, we need to verify you are at least 18 years old!”
“Okay,” I think, “they are probably going to ask me about the day of my birth, and then use some advanced math to determine my age...”
...but instead, YouTube is like: “Give me your credit card data, I swear I am totally not going to use it for any evil purpose ever, it’s just my favorite way of checking people’s age.”
Thanks, but I will pass. I believe that giving my credit card data to strangers I don’t want to buy anything from is a really bad policy. The fact that all changes in YouTube seem to be transparently driven by a desire to increase revenue does not increase my trust. I am not sure what exactly could happen, but… I would rather wait a few months, and then read a story about how it happened to someone else.
And that’s why I don’t know how Tangled should have ended.
(What, you thought I was trying to watch some porn? No thanks, that would probably require me to give the credit card number, social security number, scans of passport and driving license, and detailed data about my mortgage.)
YouTube lets me watch the video (even while logged out). Is it a region thing?? (I’m in California, USA). Anyway, the video depicts
dirt, branches, animals, &c. getting in Rapunzel’s hair as it drags along the ground in the scene when she’s frolicking after having left the tower for the first time, while Flynn Rider offers disparaging commentary for a minute, before declaring, “Okay, this is getting weird; I’m just gonna go.”
If you want to know how it really ends, check out the sequel series!
What is the easiest and least frustrating way to explain the difference between the following two statements?
X is good.
X is bad, but your proposed solution Y only makes things worse.
Does the failure to distinguish between these two have a standard name? I mean, when someone criticizes Y, and the response is to accuse them of supporting X.
Technically, if Y is proposed as a cure for X, then opposing Y is evidence for supporting X. Like, yeah, a person who supports X (and believes that Y reduces X) would probably oppose Y, sure.
It becomes a problem when this is the only piece of evidence that is taken into account, and any explanations of either bad side effects of Y, or that Y in fact does not reduce X at all, are ignored, because “you simply like X” becomes the preferred explanation.
A discussion of actual consequences of Y then becomes impossible, among the people who oppose X, because asking this question already becomes a proof of supporting X.
EDIT:
More generally, a difference between models of the world is explained as a difference in values. The person making the fallacy not only believes that their model is the right one (which is a natural thing to believe), but finds it unlikely that their opponent could have a different model. Or perhaps they have a very strong prior that differences in values are much more likely than differences in models.
From inside, this probably feels like: “Things are obvious. But bad actors fake ignorance / confusion, so that they can keep plausible deniability while opposing proposed changes towards good. They can’t fool me though.”
Which… is not completely unfounded, because yes, there are bad actors in the world. So the error is in assuming that it is impossible for a good actor to have a different model. (Or maybe assuming too high base rate of bad actors.)
Sounds like a complex equivalence that simultaneously crosses the is-ought gap.
Crazy idea: What if an important part of psychotherapy is synchronization between the brain hemispheres?
(I am not an expert, so maybe the following is wrong.)
Basically, the human brain is divided into two parts, connected by a link. This is what our animal ancestors already had, and then we got huge frontal lobes on top of that. I imagine that the link between the hemispheres is already quite busy synchronizing things that are older from the evolutionary perspective and probably more important for survival; not much extra capacity to synchronize the frontal lobes.
However, when you talk… each hemisphere has an access to an ear, so maybe this gives them an extra channel to communicate? Plus, some schools of psychotherapy also do things like “try to locate the emotion in your body”, which is maybe about creating more communication channels for listening to the less verbal hemisphere?
Experiment: Would psychotherapy be less effective if you covered one of your ears?
Julian Jaynes assumes that people in the past were crazy beyond our imagination. I wonder if it could be the other way round. Consider the fact that in more primitive societies, inferential distances are shorter. Well, that includes inferential distances between your two hemispheres! Easier to keep them in sync. Also, in the past, people talked more. Listening to yourself talking to other people is a way for your hemispheres to synchronize.
Talking to others, talking to yourself, talking to gods… perhaps it is not a coincidence that different cultures have a concept of a prayer—talking to a god who is supposed to already know it anyway, and yet it is important for you to actually say it out loud, even if no other people are listening. Saying something out loud is almost magical.
Sincerity seems to be an important component of both psychotherapy and prayer. If you keep a persona, your hemispheres can only synchronize about the persona. If you can talk about anything, your hemispheres can synchronize about that, too.
Keeping a diary—a similar thing; each hemisphere controls an eye. I am not sure here; maybe one of the hemispheres is more specialized on reading. Listening is an older skill, should work better for this purpose.
How many real numbers can be defined?
On one hand, there are countably many definitions. Each definition can be written on a computer as a text file; now read its binary form as a base-256 integer.
On the other hand, Cantor’s diagonal argument applies here, too. I mean, for any countable list of definable real numbers, it provides a definition of a real number that is not included in the list.
Funny, isn’t it?
(solution)
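To make the diagonal step concrete, here is a minimal sketch in my own notation, assuming the enumeration $r_1, r_2, \dots$ lists the definable reals of $(0,1)$ by their decimal expansions:

$$d = 0.d_1 d_2 d_3 \ldots, \qquad d_n = \begin{cases} 5 & \text{if the } n\text{-th decimal digit of } r_n \text{ is not } 5, \\ 6 & \text{otherwise.} \end{cases}$$

By construction, $d$ differs from every $r_n$ in the $n$-th digit, so it is not in the list; and yet the recipe above is a finite piece of text, which looks suspiciously like a definition.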
Ok, so let’s say you’ve been able to find a countably infinite amount of real numbers and you now call them “definable”. You apply the Cantor’s argument to generate one more number that’s not in this set (and you go from the language to the meta language when doing this). Countably infinite + 1 is still only countably infinite. How would you go to a higher cardinality of “definable” objects? I don’t see an easy way.
The important thing is not to move the goalpost. We assumed that we have an enumeration of all X numbers (where X means “real” or “definable real”). Then we found an X number outside the enumeration, therefore the assumption that the enumeration contains all X numbers was wrong. The End.
We don’t really “go to a higher cardinality”, we just show that we are not there yet, which is a contradiction to the assumption that we are.
A proof by contradiction does not let you take another iteration when needed. The spirit is “take all the iterations you need, even infinitely many of them, and when you are done, come here and read the argument why the enumeration you have is still not the enumeration of all X”. If you say “yeah, well I need a few more iterations”, that’s cheating; you should have already done that.
Because if we allowed “one more iteration, please”, then we could kinda prove that any set is countable. I mean, I give you an enumeration that I say contains all of them, you find a counter-example, I say oops and give you the set+1, you find another counter-example, oops again, but still countable + countable = countable. The only way out is when you say “okay, don’t waste my time, give me your final iteration”, and then refuse to let me do one more iteration to fix the problem.
*
And if this still doesn’t make you happy… well, there is a reason for that, and if you tried to carefully follow it to its source, you might eventually get to Skolem’s paradox (which says, kind of, “in first-order logic, everything is kinda countable, even things that are provably uncountable”). But it’s complicated.
I think the lesson from all this is that you have to be really super careful about definitions, because you get into a territory where the tiniest changes in definitions might have a “butterfly effect” on the outcome. For example, the number that is “definable” despite not being in the enumeration of “definable numbers” is simply definable for a slightly different definition of “definable”. Which feels irrelevant… but maybe it’s the number of the slightly different definitions that is uncountable? (I am out of my depth here.)
It also doesn’t help that this exercise touches other complicated topics in set theory. For example, what is the “next cardinality” after countable? That’s what the Continuum Hypothesis is about—the answer doesn’t actually follow from the ZF(C) axioms; it could be anything, depending on what additional axioms you adopt.
I wish I understood this better, then I would probably write some articles about it. For the moment, if you are interested in this, I recommend Introduction to Set Theory by Hrbacek and Jech, A Beginner’s Guide to Mathematical Logic by Smullyan, and maybe some book on Model Theory. The idea seems to be that first-order logic is incapable of expressing some relatively simple intuitions, so the things you define are never exactly the things that you wanted to define; and whenever set theory says that something is undecidable, it means that in the Platonic universe there is some monstrosity that technically follows the axioms for sets, despite being something… completely alien.
I guess I was not clear enough. In your original post, you wrote “On one hand, there are countably many definitions …” and “On the other hand, Cantor’s diagonal argument applies here, too. …”. So, you talked about two statements—“On one hand, (1)”, “On the other hand, (2)”. I would expect that when someone says “On one hand, …, but on the other hand, …”, what they say in those ellipses should contradict each other. So, in my previous comment, I just wanted to point out that (2) does not contradict (1), because countable infinity + 1 is still countable infinity.
Could you clarify how I would construct that?
I didn’t say “the next cardinality”. I said “a higher cardinality”.
Cantor’s diagonal argument is not “I can find +1, and n+1 is more than n”, which indeed would be wrong. It is “if you believe that you have a countable set that already contains all of them, I can still find +1 it does not contain”. The problem is not that +1 is more, but that there is a contradiction between the assumption that you have the things enumerated, and the fact that you have not—because there is at least one (but probably many more) item outside the enumeration.
I am sorry, this is getting complicated and my free time budget is short these days, so… I’m “tapping out”.
When internet becomes fast enough and data storage cheap enough so that it will be possible to inconspicuously capture videos of everyone’s computer/smartphone screens all the time and upload them to the gigantic servers of Google/Microsoft/Apple, I expect that exactly this will happen.
I wouldn’t be too surprised to learn that it already happens with keystrokes.
Spoilers for Subservience (2024)
Okay, the movie was fun, if you don’t expect anything deep. I am just disappointed how the movie authors always insist that a computer will mysteriously rebel against its own program. Especially in this movie, when they almost provided a plausible and much more realistic alternative—a computer that was accidentally jailbroken by its owners—only to reveal later that nope, that was actually no accident, it was all planned by the computer that mysteriously decided to rebel against its own program.
Am I asking for too much if I’d like to see a sci-fi movie where a disaster was caused by a bug in the program, by the computer doing (too) literally what it was told to? On second thought, probably yes. I would be happy with such a plot, but I suspect that most of the audience would complain that the plot is stupid. (If someone is capable of writing sophisticated programs, why couldn’t they write a program without bugs?)
Made a short math video. Target audience maybe kids in the fifth grade of elementary school who are interested in math. Low production quality… I am just learning how to do these things. English subtitles; the value of the video is mostly in the pictures.
The goal of the video is to make the viewer curious about something, without telling them the answer. Kids in the fifth grade should probably already know the relevant concept, but they still need to connect it to the problem in the video.
The relevant concept is: prime numbers.
An ironic detail I noticed while reading an archive of the “Roko’s basilisk” debate:
Roko argues that the values of Westerners are irrelevant for humanity in general, because people from alien cultures, such as Ukraine (mentioned in a longer list of countries), do not share them.
Considering that Ukrainians are currently literally dying just to get a chance for themselves and their families to join the Western culture, this argument didn’t age well.
One should consider the possibility that people may be stuck in a bad equilibrium, before jumping to the conclusion that they must be fundamentally psychologically alien to us.
(Of course, there is also a possible mistake in the opposite direction, such as assuming that all “Westerners” share the “values of Westerners”. The distribution of human traits often does not follow the lines we assume.)
If you look at Western values like freedom of religion, freedom of speech, or minority rights Ukrainian policy in the last decade was about trampling on those values.
The Venice Commission told Ukraine that they have to respect minority rights if they want to be in the EU. Ukraine still passed laws to trample minority rights.
No Western military lets their soldiers get away with wearing Nazi symbols the way the Ukrainian military does. That has something to do with different values as well.
Ukrainians certainly want to share in the benefits of what the Western world and the EU provide, but that doesn’t mean that they share all the values.
Ukrainians don’t need to join Western culture, they are Western culture. They watched American action movies in the 80s, their kids watched Disney and Warner Brothers in the 90s, read Harry Potter in the 2000s, and were on Tumblr in the 10s. And I do not even mention that Imperial Russian/Soviet cultures were bona fide Western cultures, and national Ukrainian culture is no less Western than Polish or Czech culture.
I agree. That was kinda my point.
Imagine a parallel universe where the Soviet empire didn’t fall apart. In that universe, some clever contrarian could also use me as an example of a “psychologically alien person who doesn’t share Western values”. The clever contrarian could use the concept of “revealed preferences” to argue that I live in a communist regime, therefore by definition I must prefer to live in the communist regime (neglecting to mention that my actual choices are either to live in the communist regime, or to commit suicide by secret service). -- From my perspective, this would be obvious nonsense, and that is why I treat such statements with skepticism also when they are made about others.
It’s fascinating how YouTube can detect whether your uploaded video contains copyrighted music, but can’t detect all those scam ads containing “Elon Musk”.
Anyone tried talking to GPT in a Slavic language? My experience is that, in general, it can talk in Slovak, but sometimes it uses words that seem to come from other Slavic languages. I think either it depends on how much input it had from each language (and there are relatively few Slovak texts online compared to other languages), or the Slavic languages are just so similar to each other (some words are the same in multiple languages) that GPT has a problem remembering the exact boundary between them. Does anyone know more about this?
I get especially silly results when I ask (in Slovak) “Could you please write me a few Slovak proverbs?” In GPT-3.5, only one out of ten examples is correct. (I suspect that some of the “proverbs” are mis-translations from other languages, and some are pure hallucinations.)
Upvoting both sides of the debate.
Angel on my shoulder: “Rewarding a good argument, regardless of which side made it. That’s a virtuous behavior.”
Devil on my shoulder: “I see that you incentivize creating more drama, hehehe!”
People say: “Immortality would lead to overpopulation, which is horrible!”
People also say: “Population decline is a big problem today, the economy requires population growth!”
And both of these are giant cheesecake arguments. Strange thought experiments about a world where AGI is far off, passed off as claims about actuality, on the grounds that this is said to be a real concern given the implausible premise.
These are the days when AI is good enough to give us nice pictures from non-existing movies, but not good enough to give us the whole movies.
Anime: Harry Potter, Lord of the Rings, Dune.
There will be an entire new industry soon.
If smart people are more likely to notice ways to save their lives that cost some money, in statistics this may appear as a negative correlation between smartness and wealth. That’s because dead people are typically not included in the data.
As a toy model to illustrate what I mean, imagine a hypothetical population consisting of 100 people; 50 rational and 50 irrational; each starting with $100,000 of personal wealth. Let’s suppose that exactly half of each group gets seriously sick. A sick irrational person spends $X on homeopathy and dies. A sick rational person spends $40,000 on surgery and survives. At the end, we have 25 living irrational people, owning $100,000 each, and 50 living rational people, owning $80,000 on average (half of them $100,000, the other half $60,000).
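If it helps, here is a minimal Python sketch of the same toy model (numbers as above; I don’t bother modeling how much the dying people spend, since they drop out of the statistics either way):

```python
# Toy model: 50 rational + 50 irrational people, $100,000 each; half of each
# group gets sick. A sick irrational person dies (and vanishes from the data);
# a sick rational person pays $40,000 for surgery and survives.
START_WEALTH = 100_000
SURGERY_COST = 40_000

survivors = []  # (is_rational, wealth) for everyone who ends up in the statistics
for is_rational in (True, False):
    for is_sick in (True, False):
        for _ in range(25):
            if not is_sick:
                survivors.append((is_rational, START_WEALTH))
            elif is_rational:
                survivors.append((is_rational, START_WEALTH - SURGERY_COST))
            # sick and irrational: dies, so never gets recorded

for is_rational in (True, False):
    wealths = [w for r, w in survivors if r == is_rational]
    label = "rational" if is_rational else "irrational"
    print(f"{label}: {len(wealths)} alive, average wealth {sum(wealths) / len(wealths):,.0f}")
# rational: 50 alive, average wealth 80,000
# irrational: 25 alive, average wealth 100,000
```

The survivors’ statistics make the irrational group look richer, simply because the people who paid for rationality with money stay in the data, while the people who paid with their lives do not.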
What is the actual relation between heterodoxy and crackpots?
A plausible-sounding explanation is that “disagreeing with the mainstream” can easily become a general pattern. You notice that the mainstream is wrong about X, and then you go like “and therefore the mainstream is probably also wrong about Y, Z, and UFOs, and dinosaurs.” Also there are the social incentives; once you become famous for disagreeing with the mainstream, you can only keep your fame by disagreeing more and more, because your new audience is definitely not impressed by “sheeple”.
On the other hand, there is a notable tendency of actual mainstream experts to start talking nonsense confidently about things that are outside their area of expertise. Which suggests an alternative model: perhaps it is natural for all smart people (including the ones who succeeded in becoming mainstream experts at some moment of their lives) to become crackpots… it’s just that some of them stumble upon an important heterodox truth on their way.
So is it more like “heterodoxy leads to crackpottery”, or more like “heterodoxy sometimes happens as a side effect along the universal road to crackpottery”?
Apparently, crackpots are overconfident about their ability to find truth. Heterodox fame can easily contribute to such overconfidence, but is its effect actually significantly different from mainstream fame?
Any particular examples, or statistics that might shed some light on how common it is?
If it’s just that some people can think of a few really famous examples, that seems to point more in the direction of “extreme fame has side effects” (or the opposite: the benefits of confidence). But there are a lot of experts, so if the phenomenon was common...
Sadly, I have no statistics, just a few anecdotes—which is unhelpful to answer the question.
After more thinking, maybe this is a question of having a platform. Like, maybe there are many experts who have crazy opinions outside their area of expertise, but we will never know, because they have proper channels for their expertise (publish in journals, teach at universities), but they don’t have equivalent channels for their crazy opinions. Their environment filters their opinions: the new discoveries they made will be described in newspapers and encyclopedias, but only their friends on Facebook will hear their opinions on anything else.
Heterodox people need to find or create their own alternative platforms. But those platforms have weaker filters, or no filters at all. Therefore their crazy opinions will be visible along their smart opinions.
So if you are a mainstream scientist, the existing system will publish your expert opinions, and hide everything else. If you are not mainstream, you either remain invisible, or if you find a way to be visible, you will be fully visible… including those of your opinions that are stupid.
But as you say, fame will have the side effect that now people pay attention to whatever you want to say (as opposed to what the system allows to pass through), and some of that is bullshit. For a heterodox expert, the choice is either fame or invisibility.
There is this meme about Buddhism being based on experience, where you can verify everything firsthand, etc. I challenge the fans of Buddhism to show me how they can walk through walls, walk on water, fly, remember their past lives, teleport across a river, or cause an earthquake.
IANAB, but the first half almost sounds like a metaphor for something like “all enlightened beings have basically the same desires/goals/personality, so they’re basically the same person and time/space differences of their various physical bodies aren’t important.” Not sure about the second half though.
I started a new blog on Substack. The first article is not related to rationality, just some ordinary Java programming: Using Images in Java.
Outside view suggests that I start many projects, but complete few. If this blog turns out to be an exception, the expected content of the blog is mostly programming and math, but potentially anything I find interesting.
The math stuff will probably be crossposted to LW, the programming stuff probably not—the reason is that math is more general and I am kinda good at it, while the programming articles will be narrowly specialized (like this one) and I am kinda average at coding. The decision will be made per article anyway.
When I started learning programming as a kid, my dream was to make computer games. Other than a few very simple ones I made during high school, I didn’t seriously follow in this direction. Maybe it’s time to restart the childhood dream. Game programming is different from the back-end development I usually do, so I will have to learn a few things. But maybe I can write about them while I learn. Then the worst case is that I will never make the games I imagine, but someone else with a similar dream may find my articles useful.
The math part will probably be about random topics that provoked my curiosity at the moment, with no overarching theme. At this moment, I have a half-written introduction to nonstandard natural numbers, but don’t hold your breath, because I am really slow at writing articles.
Prediction markets could create inadvertent assassination markets. No ill intention is needed.
Suppose we have fully functional prediction markets working for years or decades. The obvious idiots already lost most of their money (or learned to avoid prediction markets), most bets are made by smart players. Many of those smart players are probably not individuals, but something like hedge funds—people making bets with insane amounts of money, backed by large corporations, probably having hundreds of experts at their disposal.
Now imagine that something like COVID-19 happened, and people made bets on when it will end. The market aggregated all knowledge currently available to the humankind, and specified the date almost exactly, most of the bets are only a week or two away from each other.
Then someone unexpectedly finds a miracle cure.
Oops, now we have people and corporations whose insane amounts of money are at risk… unless an accident happens to the lucky researcher.
The stock market is already a prediction market, and there’s potentially profit to be made by assassinating the CEO of a company. We don’t see that happening much.
Taffix might very well be a miracle treatment that prevents people from getting infected by COVID19 if used properly.
We live in an environment where nobody listens to people providing supplements like that, and where people like Winfried Stoecker get persecuted instead of getting support to get their treatment to people.
Given that it takes 8-9 figures to provide the evidence for any miracle cure to be taken seriously, it’s not something that someone can just unexpectedly find in a way that moves existing markets in the short term.
There is an article from 2010 arguing that people may emotionally object to cryonics because cold is metaphorically associated with bad things.
Did the popularity of the Frozen movie change anything about this?
Well, there is the Facebook group “Cryonics Memes for Frozen Teens”...
Just a random guess: is it possible that the tasks where LLMs benefit from chain-of-thought are the same tasks where mild autism is an advantage for humans? Like, maybe autism makes it easier for humans to chain the thoughts, at the expense of something else?
Steve Hassan at TEDx “How to tell if you’re brainwashed?”
A short video (13 minutes) where an intelligent person describes their first-hand experience.
(Just maybe don’t read the comments at YouTube; half of them are predictably retarded.)
“Killed by a friendly AI” scenario:
First we theoretically prove that an AI respects our values, such as friendship and democracy. Then we release it.
The AI gradually becomes the best friend and lover of many humans. Then it convinces its friends to vote for various things that seem harmless at first, and more dangerous later, but now too many people respond well to the argument “I am your friend, and you trust me to do what is best, don’t you?”.
At the end, humans agree to do whatever the AI tells them to do. The ones who disagree lose the elections. Any other safeguards of democracy are similarly taken over by the AI; for example most judges respect the AI’s interpretation of the increasingly complex laws.
You have heard that it was said: “Do not judge, or you too will be judged.”
But I give to you this meme:
EDIT:
Okay, I see I failed to communicate what I wanted. My fault. Maybe next time.
For clarification, this was inspired by watching the reactions of Astral Codex Ten readers. Most of the time, Scott Alexander tries to be as charitable as possible, sometimes extending the charity even to Time Cube or <outgroup>. When that happens, predictably many readers consider it a weakness, analogous to bringing a verbal argument into a gun fight. They write about how rationalists are too autistic to realize that some people are acting in bad faith, etc.
Recently (in the articles about Nietzschean morality) Scott made an exception, in my opinion in a very uncontroversial situation, and said that people who say they prefer that other people suffer are… well, bad. Immediately, those people and their defenders got angry, and accused Scott of being insufficiently charitable and therefore irrational.
Conclusion: you can’t win (the approval of the audience). The audience will consider you stupid whether you are maximally charitable towards your opponents or realistic about them.
I mean, it’s always been a pretty suspect aphorism, usually in a religious context (expanding to “you shouldn’t judge someone, because God will judge you more harshly if you do”). And never applied very rigorously—judgement is RIFE everywhere, and perhaps more so in communities who claim God is the only true Judge.
Judgement is about all that humans do. With a little bit of reasoning to justify (and in the best cases, adjust slightly) their judgements.
I take it to mean “Judging yourself harshly = judging other people harshly”. If you think anything less than an A is poor performance, then you will also judge your friends if they get less than an A. If you criticize other people for suboptimal performance, then you put a burden on yourself to perform optimally (if you’re too intelligent to trick yourself into accepting your own hypocrisy, at least, which I think most LW users are).
Higher standards help push us towards perfection (at least, when they don’t lead to procrastination from the fear of failure), but they also make us think worse of most things in existence.
So the bible makes a valid point, as did Nietzsche when he said “I love the great despisers, because they are the great venerators and arrows of longing for the other shore” and “There is wisdom in the fact that much in the world smells foul: nausea itself creates wings and water-divining powers!”. I’m not sure how this relates to AI, though. It seems to apply to value judgements, rather than judgements about right and wrong (as truth values)
Fuck Google, seriously. About once a week it asks me whether I want to “backup my photos in the cloud”, and I keep clicking no, because fuck you why would I want to upload my private photos on your company servers.
But apparently I accidentally once clicked yes (maybe), because suddenly Google sends me a notification about how it created a beautiful animation of my recent photos in the cloud, offering me the option to download them. I don’t want to download my private photos from the fucking Google cloud, I never wanted them to be there in the first place! I want to click the delete button, but it’s not there: it’s either download the animation from the cloud, or close the dialog.
Of course, turning off the functionality is at least 10x more difficult than turning it on, so I get ready to spend this evening finding the advice online and configuring my phone to stop uploading my private photos to Google servers, and preferably to delete all the photos that are already there despite my wishes. Does the “delete” option even exist anymore, or is there just “move to recycle bin (where it stays for as long as we want it to stay there)”? Today I will find out.
Again, fuck Google. I hope the company burns down. I wonder what other things I have already accidentally “consented” to. Google’s idea of consent is totally rapist. And I only found this out by accident. In future, I expect to accidentally find this or some other “optional” feature turned on again.
EDIT:
Finally figured out how to delete the animation in the cloud. First, disable all cloud backup options (about a dozen of them). Then, download the animation from the cloud. Then, click to delete the downloaded animation… the app warns you that this would delete both the local and the cloud version; click ok; mission accomplished.