I can understand your point, but I am not persuaded yet. Let me try to clarify why. During the year and a half of COVID, in-person workshops were not possible. During this time there were people who would have strongly benefited from the workshop, and for whom it would have been especially timely (for example, because they were making a career choice). Some of them could arrange a private space for the duration of the workshop. For them, an online workshop during this period would certainly have been more beneficial than no workshop at all. Moreover, conducting at least one online workshop would be a good experiment that would yield useful information. It is not at all obvious to me why the prior on “an online workshop is useless or harmful, taking the opportunity cost into account” is so high that this experiment should not be conducted.
Yes, I hope someone from CFAR can explain it to me better.
Valentin2026
It is a good justification for this behavior, but it does not seem to be the most rational choice. Indeed, one could require that participants of an online workshop have a private space (their own bedroom, an office, a hotel room, a remote spot in a park, whatever fits). I am pretty sure there is a significant number of people who would prefer an online workshop to an offline one (especially when all offline workshops are canceled due to COVID), and who have or can find a private space for the duration of the workshop. Saying that we are not doing it because some people lack privacy is like a restaurant refusing to serve meat to anyone because some of its customers are vegans. Of course, an online workshop is not for everyone, but there are people for whom it would work.
I agree that for some people physical contact (hugs, handshakes, etc.) indeed means a lot. However, that is not true for everyone. Moreover, even if an online workshop is less effective due to the lack of this spirit, is it really so ineffective that it is worse than no workshop at all? Finally, why not just try? It sounds like something that should be tried at least once, and if it fails, well, then we see that it fails.
Yes, I hope someone who attended CFAR (or is even somehow related to it) will see this question and give their answer.
[Question] Why are there no online CFAR workshops?
Are there any other examples where rationality gets you to the answer faster than the scientific approach? If so, it would be good to collect and mention them. If not, I am pretty suspicious about the QM one as well.
[Question] Erratum for “From AI to Zombies”
First of all, it is my mistake: in the paper they used pain more as a synonym for suffering. They wanted to clarify that the animal avoids tissue damage (heat, pinching, electric shock, etc.) not just in the moment, but learns to avoid it. Avoiding it on the spot is simply nociception, which can be seen in many low-level animals.
I don’t know much about the examples you mentioned. For example, bacteria certainly can’t learn to avoid stimuli associated with something bad for them. (Well, they can on an evolutionary timescale, but not as a single bacterium.)
If it is, does that mean we should consider all artificial neural network training to be animal experimentation? Should we adopt something like “code welfare is also animal welfare”?
I agree with the point about a continuous capacity to suffer rather than a threshold. I totally agree that there is no objective answer; we can’t measure suffering. The problem, however, is that this leaves a practical question with no clear way to resolve it: how should we treat other animals and our code?
Let me try to rephrase it in terms of something that can be done in a lab, and see if I understand your point correctly. We should conduct experiments with humans, identifying what causes suffering, at what intensity, and what happens in the brain during it. Then, if an animal has the same brain regions, it is capable of suffering; otherwise, it is not. But that won’t be the functional approach, and we can’t blindly extrapolate it to AI.
If we want the functional approach, we can only look at behavior: what we do while we suffer, what we do afterwards, and so on. Then a being suffers if it demonstrates the same behavior. The problem here will be how to generalize from human behavior to animals and AI.
I like the idea. Basically, you suggest taking the functional approach and advancing it. What do you think this kind of process could look like?
Which animals can suffer?
[Question] Which activities do you prefer for recovering productivity?
Thank you!
Thank you, but it is again like saying: “Oh, to solve a physics problem you need calculus. Calculus uses real numbers. The most elegant way to introduce the real numbers is to construct them from the rationals, which are built from the natural numbers via the Peano axioms. So let’s make physicists study the Peano axioms, set theory, and formal logic.”
In any area of math you need some set theory and logic, but usually in an amount that can be covered in one or two pages.
Thank you, but I would say this answer is too general. For example, suppose your problem is to figure out planetary motion. You need calculus, that’s clear. So, according to this logic, you would first need to look at the building blocks: introduce the natural numbers using the Peano axioms, then study their properties, then introduce the rationals, and only then construct the real numbers. And this is fun; I really enjoyed it. But does it help solve the initial problem? Not at all. You can just introduce the real numbers immediately. Or, if you only care about solving mechanics problems, you can work with the “intuitive” calculus of infinitesimals, as Newton himself did. It is not mathematically rigorous, but you will solve everything you need.
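To illustrate the intuitive style (a standard textbook example, not anything specific to Newton’s own notation): to differentiate $x(t) = t^2$, treat $dt$ as an infinitesimally small increment,

$$\frac{dx}{dt} = \frac{(t+dt)^2 - t^2}{dt} = \frac{2t\,dt + (dt)^2}{dt} = 2t + dt \approx 2t,$$

and simply discard the leftover $dt$ at the end. No Dedekind cuts, no Peano axioms, and yet this is enough for all of Newtonian mechanics.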
So, when you study other areas of math (probability theory, for example), you need some knowledge of set theory, that’s right. But this set theory is not something profound that has to be studied separately; it can be introduced in a couple of pages. I don’t know much about decision theory; does it use more?
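To give a sense of scale (a standard example, nothing deeper is usually required at the introductory level): in probability theory the sample space is a set $\Omega$, events are subsets $A, B \subseteq \Omega$, and the set theory involved is just unions, intersections, and complements, as in

$$P(A \cup B) = P(A) + P(B) - P(A \cap B).$$

This is exactly the kind of material that fits in a couple of pages.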
[Question] How does one use set theory for the alignment problem?
It is indeed worrisome. I would say it definitely does not help and only increases the risk. However, I don’t think this country-that-must-not-be-named would start a nuclear war first, simply because it has too much to lose and its non-nuclear options are excellent. This may change in the future, so yes, there is some probability as well.
That is exactly the problem. Suppose the government of Plutonia sincerely believes that as soon as the other countries are protected, they will help the people of Plutonia overthrow the government. And they kind of have reasons for such a belief. Then, in their model of the world, a world protected from them is a deadly threat, basically a death sentence. A nuclear war, however horrible, still leaves them bomb shelters where they can survive, with enough food inside just for themselves to live out their natural lives.
“When I was talking to Valentine (head of curriculum design at the time) a while ago he said that the spirit is the most important thing about the workshop.”
Now, this already sounds a little disturbing and resembles Lifespring. Of course, the spirit is important, but I thought the workshop was going to arm us with instruments we can use in real life, not only in an emotional state of comradeship with like-minded rationalists.