If you think you might be in a solipsist simulation, you might try to add some chaotic randomness to your decisions. For example, go outside under some trees and wait until a leaf or a seed or anything else falls on your face: if it hits the left half of your face, choose one course of action; if it hits the right half, choose another. If you do this several times in your life, each of those decisions will depend on the state of the whole Earth and on all your previous decisions, since weather is chaotic, and so the simulators will be unable to get good predictions about you from a solipsist simulation. A potential counterargument is that they could analyze your thinking and hardcode this binary random choice, i.e. hardcode the memory of the seed hitting your left side. But that would require an intelligent process analyzing your thinking to isolate the randomness, and then you could make the dependence of your strategy on randomness even more complicated.
Nice. I have a suggestion for how to improve the article: put a clearly stated theorem somewhere in the middle, in its own block, like in academic math articles.
Why do you hate earworms? To me, they are mildly pleasant. The only time I wish I didn’t have an earworm is when I’m trying to remember another tune for musicianship purposes and the earworm prevents me from being able to do that.
Instead of inspecting all programs in the UP, just inspect all programs with length less than n. As n becomes larger and larger, this covers more and more of the total probability mass in the UP, and the total probability mass covered this way approaches 1. What to do about the non-halting programs? Well, just run all the programs for m steps, I guess. I think this is the approximation of the UP that is implied.
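Roughly, I imagine something like the sketch below. The `run` interpreter for the reference machine is a stand-in I'm assuming here, not something from the post; any fixed prefix machine would do.

```python
# Length- and time-bounded approximation to the universal prior (a sketch).
# `run(program, steps)` is a hypothetical interpreter for a fixed reference
# machine: it returns the output string if `program` halts within `steps`
# steps, and None otherwise.
from itertools import product

def approx_universal_prior(n, m, run):
    """Enumerate all bitstring programs shorter than n, run each for at most
    m steps, and weight the ones that halt by 2**-length."""
    mass = {}        # output -> accumulated probability mass
    covered = 0.0    # mass of programs that halted within m steps
    for length in range(1, n):
        for bits in product("01", repeat=length):
            out = run("".join(bits), m)
            if out is not None:
                weight = 2.0 ** (-length)
                mass[out] = mass.get(out, 0.0) + weight
                covered += weight
    # As n and m grow, `covered` approaches the total mass of halting programs.
    return mass, covered
```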
Well, now I’m wondering—is neural network training chaotic?
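One crude way to poke at that question (just a sketch of the kind of experiment I have in mind, not anything from the post): train two copies of the same tiny network from initializations that differ by a tiny perturbation and watch whether the parameter gap grows or shrinks over training.

```python
# Sensitivity-to-initial-conditions probe for a tiny network trained with
# full-batch gradient descent on a toy regression problem.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))    # toy inputs
y = rng.normal(size=(64, 1))     # toy targets

def train(W1, W2, lr=0.02, steps=2000):
    for _ in range(steps):
        h = np.tanh(X @ W1)                               # hidden activations
        err = h @ W2 - y                                  # prediction error
        gW2 = h.T @ err / len(X)                          # grad of 0.5*MSE wrt W2
        gW1 = X.T @ ((err @ W2.T) * (1 - h**2)) / len(X)  # grad wrt W1
        W1, W2 = W1 - lr * gW1, W2 - lr * gW2
    return W1, W2

W1 = 0.3 * rng.normal(size=(10, 32))
W2 = 0.3 * rng.normal(size=(32, 1))
eps = 1e-8 * rng.normal(size=W1.shape)                    # tiny initial perturbation

A1, A2 = train(W1, W2)
B1, B2 = train(W1 + eps, W2)
print("initial gap:", np.linalg.norm(eps))
print("final gap:  ", np.linalg.norm(A1 - B1) + np.linalg.norm(A2 - B2))
```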
This is awesome, I would love more posts like this. Out of curiosity, how many hours have you and your colleague spent on this research?
In my personal experience, exposure therapy did help me with the fear of such “extreme” risks.
At the very beginning of the post, I read: “Quick psychology experiment”. Then I read: “Right now, if I offered you a bet …”. Because of this, I thought of it as a potential real-life situation in which the author is offering me this bet, not a platonic ideal situation. I declined both bets, not because they are bad bets in an abstract world, but because I don’t trust the author in the first bet and I trust them even less in the second bet.
If you rejected the first bet and accepted the second bet, that alone is enough to rule out your having any utility function consistent with your decisions.
Under this interpretation, no, it doesn’t.
Could you, the author, please modify the thought experiment to indicate that it is assumed that I completely trust the one proposing the bet to me? And maybe discuss other caveats too. Or just say that it’s Omega who’s offering me the bet.
So you say humans don’t reason about the space and objects around them by keeping 3D representations. You think that instead the human brain collects a bunch of heuristics about what the response should be to a 2D projection of 3D space, viewed from different angles: an incomprehensible mishmash of neurons, like an artificial neural network that identifies a digit from an image without any CNN layers and just memorizes rules for all kinds of pictures at all kinds of angles in a fully connected layer.
I guess I was not clear enough. In your original post, you wrote “On one hand, there are countably many definitions …” and “On the other hand, Cantor’s diagonal argument applies here, too. …”. So you made two statements: “On one hand, (1)” and “On the other hand, (2)”. I would expect that when someone says “On one hand, …, but on the other hand, …”, the things in those ellipses contradict each other. So, in my previous comment, I just wanted to point out that (2) does not contradict (1), because countable infinity + 1 is still countable infinity.
take all the iterations you need, even infinitely many of them
Could you clarify how I would construct that?
For example, what is the “next cardinality” after countable?
I didn’t say “the next cardinality”. I said “a higher cardinality”.
Ok, so let’s say you’ve been able to find a countably infinite set of real numbers and you now call them “definable”. You apply Cantor’s argument to generate one more number that’s not in this set (and you go from the language to the metalanguage when doing this). Countably infinite + 1 is still only countably infinite. How would you get to a higher cardinality of “definable” objects? I don’t see an easy way.
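To spell out the cardinal arithmetic behind my point (these are standard facts, nothing specific to your construction):

$$\aleph_0 + 1 = \aleph_0, \qquad \aleph_0 + \aleph_0 = \aleph_0, \qquad \aleph_0 \cdot \aleph_0 = \aleph_0, \quad\text{but}\quad 2^{\aleph_0} > \aleph_0.$$

So adding the diagonal number, or even repeating the diagonalization countably many times, keeps the set of “definable” reals countable; the only standard route to a strictly higher cardinality is something like taking the power set (Cantor’s theorem), and I don’t see how your construction gives you that.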
To check if A causes B, you can check what happens when you intervene and modify A, and also what happens when you intervene and modify B. That’s not always possible though. You can consult “Causality: Models, Reasoning, and Inference” by Pearl for more details.
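As a toy illustration of what the intervention test buys you (my own sketch of a two-variable structural model, not an example taken from Pearl’s book):

```python
# Toy structural model A -> B, illustrating why interventions (Pearl's do())
# distinguish "A causes B" from "B causes A" even though both correlate.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

def sample(do_A=None, do_B=None):
    A = rng.normal(size=N) if do_A is None else np.full(N, do_A)
    noise = rng.normal(size=N)
    B = 2 * A + noise if do_B is None else np.full(N, do_B)
    return A, B

A, B = sample()
print("observational E[B]:", B.mean())   # ~0
_, B_doA = sample(do_A=1.0)
print("E[B | do(A=1)]:", B_doA.mean())   # ~2 -> intervening on A shifts B
A_doB, _ = sample(do_B=1.0)
print("E[A | do(B=1)]:", A_doB.mean())   # ~0 -> intervening on B leaves A alone
```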
They commit to not using your data to train their models without explicit permission.
I’ve just registered on their website because of this article. During registration, I was told that conversations flagged by their automated system that checks whether you are following their terms of use are regularly reviewed by humans and used to train their models.
When learning to sing, humming is used to extend your range higher. Not sure if it’s used to extend it lower.
Replied in PM.
I would like to recommend to Johannes that he try to write and post content in a way that evokes less cringe in people. I know it does evoke that, because I personally feel cringe.
Still, I think there isn’t much that’s objectively bad about this post. I’m not saying the post is very good or convincing. I think its style is super weird, but that should be considered okay in this community. These thoughts remind me of something Scott Alexander once wrote: that sometimes he hears someone say true but low-status things, his automatic thought is that the person must be stupid to say something like that, and he has to consciously remind himself that what was said is actually true.
Also, all these thoughts about this social reality sadden me a little—why oh why is AI safety such a status-concerned and “serious business” area nowadays?
I’ve been learning to play diatonic harmonica for the last 2 years. This is my first instrument and I can confirm that learning an instrument (and music theory) is a lot of fun and it has also taught me some new things about how to learn things in general.
I hum all the time anyway.
Unless I don’t recognize the sounds. It’s like asking me to beatbox the last 5 seconds of the gurgling of a nearby river. How the fudge would I do that?
Wait, are there people who can do that?
I think that’s pretty easy :)
At first I disbelieved it. I thought A > B. Then I wrote code myself and checked, and got that B > A. I believed this result. Then I thought about it and realized why my reasoning for A > B was wrong. But I still didn’t understand (and still don’t understand) why the described random process is not equivalent to randomly choosing 2, 4, or 6 on every roll. I thought some more and now I have some doubts. My first doubt is whether there exists some kind of standard way of describing random processes and conditioning on them, and whether the problem as stated by notfnofn unambiguously picks out such a process. Perhaps the problem is just underspecified? Anyway, this is very interesting.
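For reference, here’s roughly the simulation I wrote. I’m reconstructing notfnofn’s setup from memory, so treat the exact problem statement as my assumption: process A rolls a fair die until a 6 and conditions on every roll having been even; process B rolls a die whose faces are only 2, 4, 6.

```python
# Compare: (A) a fair die rolled until a 6, conditioned on all rolls being even,
# versus (B) a die with faces {2, 4, 6} rolled until a 6.
import random

random.seed(0)

def process_A():
    """Rejection-sample fair-die sequences; keep only the all-even ones."""
    while True:
        rolls = 0
        while True:
            r = random.randint(1, 6)
            rolls += 1
            if r == 6:
                return_ok = True
                break
            if r % 2 == 1:       # an odd roll appeared: reject this sequence
                return_ok = False
                break
        if return_ok:
            return rolls

def process_B():
    rolls = 0
    while True:
        rolls += 1
        if random.choice([2, 4, 6]) == 6:
            return rolls

n = 100_000
print("A (conditioned fair die):", sum(process_A() for _ in range(n)) / n)  # ~1.5
print("B (die with faces 2,4,6):", sum(process_B() for _ in range(n)) / n)  # ~3.0
```

I think the rejection step is where the non-equivalence sneaks in: sequences containing an early odd roll get thrown away, and long sequences are more likely to contain an odd roll, so conditioning favors short sequences and A ends up with fewer expected rolls than B.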