I like the idea. Basically, you suggest taking the functional approach and advancing it. What do you think this type of process could be?
Thank you!
Thank you, but it is again like saying: “oh, to solve a physics problem you need calculus. Calculus uses real numbers. The most elegant way to introduce real numbers is to construct them from the rational numbers, which are built from the natural numbers via the Peano axioms. So let’s make physicists study the Peano axioms, set theory, and formal logic”.
In any area of math, you need some set theory and logic—but usually in an amount that can be covered in one or two pages.
Thank you, but I would say it is too general an answer. For example, suppose your problem is to figure out planetary motion. You need calculus, that’s clear. So, according to this logic, you would first need to look at the building blocks: introduce the natural numbers using the Peano axioms, then study their properties, then introduce the rationals, and only then construct the real numbers. And this is fun, I really enjoyed it. But does it help to solve the initial problem? Not at all. You can just introduce real numbers immediately. Or, if you care only about solving mechanics problems, you can work with the “intuitive” calculus of infinitesimals, like Newton himself did. It is not mathematically rigorous, but you will solve everything you need.
So, when you study other areas of math (like probability theory, for example), you need some knowledge of set theory, that’s right. But this set theory is not something profound that has to be studied separately; it can be introduced in a couple of pages. I don’t know much about decision theory—does it use more?
It is worrisome indeed. I would say it definitely does not help and only increases the risk. However, I don’t think this country-that-must-not-be-named would start a nuclear war first, simply because it has too much to lose and its non-nuclear options are excellent. This may change in the future—so yes, there is some probability as well.
That is exactly the problem. Suppose the Plutonia government sincerely believes that as soon as other countries are protected, they will help the people of Plutonia overthrow the government. And they kind of have reasons for such a belief. Then (in their model of the world) a world protected from them is a deadly threat, basically a capital sentence. A nuclear war, however horrible, leaves them bomb shelters where they can survive, with enough food inside just for themselves to live until natural death.
The problem is that retaliation is not immediate (missiles take a few hours to reach their targets). For example, Plutonia can demonstratively destroy one target and declare that any attempt at retaliation will be retaliated against doubly: as soon as the other country launches N missiles, Plutonia launches 2N.
Yes, absolutely, it is the underlying thesis.
You are right that a “democratic transition” will not necessarily solve that (just as it did not completely resolve the problem at the end of the Cold War), so actually the probability must be higher than I estimated—even worse news.
Are there any other options for decreasing the risk?
From a Russian perspective. Well, I didn’t discuss it with officials in the government, only with friends who support the current government. So I can only say what they think and feel, and of course, it is just anecdotal evidence. When I explicitly discussed the possibility of nuclear war with one of them, he stated that this possibility is small and that as long as the escalation is beneficial for Russia, he will support it.
I don’t want to go into politics here and discuss what type of government would be better for Russia. I was more interested in estimating the probability of nuclear war (or the other catastrophes mentioned in the main post).
When I say “use,” I mean actually detonating—not necessarily destroying a big city; initially it may be just something small.
Within the territory is possible, though I think outside is more realistic (I think the army will eventually be too weak to fight external enemies with modern technology, but it will always be able to fight unarmed citizens).
Sorry, I didn’t get what you mean by “non-dominant political controllership”—can you rephrase it?
Thank you, wonderful series!
How should we deal with cases where epistemic rationality contradicts instrumental rationality? For example, we may want to use the placebo effect, because among our values are that being healthy is better than being sick and that less pain is better than more pain. But the placebo effect relies on believing the pill to be a working medicine, which is false. Is there any way to satisfy both epistemic and instrumental rationality?
Hmmm, but I am not saying that the benevolent-simulators hypothesis is false and that I just choose to believe in it because it brings a positive effect. Rather the opposite—I think benevolent simulators are highly likely (more than a 50% chance). So it is not a method for “believing things which are known to be false”; it is rather an argument for why they are likely to be true (of course, I may be wrong somewhere in this argument, so if you find an error, I will appreciate it).
In general, I don’t think people here want to believe false things.
Of course, the placebo effect is useful from an evolutionary point of view, and it is the subject of quite a lot of research. (The main idea: it is energetically costly to keep your immune system always on high alert, so you boost it at particular moments correlated with pleasure—usually from eating/drinking/sex, which is when germs usually get into the body. If you are interested, I will find the link to the research paper where this is discussed.)
I am afraid I still fail to explain what I mean. I am not trying to deduce from observation that we are in a simulation; I don’t think that is possible (unless the simulators decide to allow it).
I am trying to see how the belief that we are in a simulation with benevolent simulators can change my subjective experience. Note that I can’t just trick myself into believing it merely because believing it is healthy. This is why I needed all the theory above—to show that benevolent simulators are indeed highly likely. Then, and only then, can I hope for the placebo effect (or for a real intervention masquerading as a placebo effect), because now I believe it may work. If I could just make myself believe whatever I needed, of course I would not need all these shenanigans—but, after being a faithful LW reader for a while, that is really hard, if possible at all.
That is exactly the point: there should be no proof of the simulation unless the simulators want it. Namely, there should be no difference observable to us between a universe governed simply by the laws of Nature and one with intervention from the simulators. We can’t look at any effect and say: this happens, therefore we are in a simulation.
The point was the opposite. Assume we are in a simulation with benevolent simulators (which, according to what I wrote in the theoretical part of the post, is highly likely). What can they do such that we would still be unable to classify the intervention as something outside the laws of nature, yet our well-being would be improved? What are the practical consequences for us?
By the way, we do not even have to require the ability to change probabilities. Just the placebo effect is good enough. Consider a person who was suffering from depression, or addiction, or akrasia—and now he is much better. Can a strong placebo (like a very strong religious experience) do that? Well, yes, there have been multiple cases. Does it improve well-being? Certainly yes. So the practical point is that if such an intervention masquerading as placebo can help, it is certainly worth trying. Of course, one can say that I am just tricking myself into believing it and then the placebo simply works, but the point is that I have reasons to believe it (see the theoretical part), and this is what makes the placebo work.
Thank you for directing my attention to the post, I will certainly read it.
I would suggest a per-minute subscription. It would be approximately $1/minute, which is actually close to my akrasia fine for spending time on job-unrelated websites.
Thank you. There was one paper in the post about older adults and calorie restriction. However, it is somewhat biased—the experiment used slightly overweight people. So yes, calorie restriction is good for the overweight. Duh.
Do you know any other studies? Thank you!
It sounds possible. However, before even the first people get it, there should be some progress with animals, and right now there is nothing. So I would bet it is not going to happen in, say, the next 5 years. (Well, unless we suddenly get radical progress in creating a superAI that will do it for us, but that is a huge separate question of its own.)
I would say I wanted to think first about the very near future, without a huge technological breakthrough. Of course, immortality and superAI are far more important than anything I mentioned in the original post. However, I think there is a non-negligible likelihood of something from the original post happening very soon (maybe even this year), while the likelihood of immortality before the end of this year seems quite negligible.
Let me try to rephrase it in terms of something that can be done in a lab and see if I get your point correctly. We should conduct experiments with humans, identifying what causes suffering and with what intensity, and what happens in the brain during it. Then, if an animal has the same brain regions, it is capable of suffering; otherwise, it is not. But that won’t be the functional approach, and we can’t extrapolate it blindly to AI.
If we want the functional approach, we can only look at behavior: what we do when we suffer, what we do afterwards, etc. Then a being suffers if it demonstrates the same behavior. Here the problem will be how to generalize human behavior to animals and AI.