I first tried to describe rationality piece by piece, but realized that just comes out as something like: “Enumerate all the principles, fundamentals, and ideas you can think of and find about effective thinking and action. Master all of them. Apply them more thoroughly and systematically to every aspect of your life. Use the strongest of them to solve the problems they are most relevant to. Find their limits. Be unsatisfied. Create new principles, fundamentals, and ideas to master. Become strong and healthy in all ways.”
Non-meta attempt:
<Epistemic status: I predict most of these are wrong. In fact, I rather recently proved I didn’t understand fundamental parts of The Sequences, so I know that my basis of belief for all of these is weak and thoroughly misled. I am certain my foundation for these beliefs is broken even if the beliefs themselves turn out to be basically accurate; I cannot thoroughly justify why they are right.>
General strategy: collect all the important things you think are true, and consider what it means for each to be false.
Starting with a list of the things most important to you, state the most uncontroversial and obvious facts about how those work and why that is the case. Now assume the basic facts about the things most important to you are wrong. The impossible is easy. The probable is actually not true. Your assumptions do not lead to their conclusions. The assumptions are also false. You don’t want the conclusions to be true anyway. The things that you know work, work based on principles other than what you thought. Most of your information about those phenomena is maliciously and systematically corrupted, and all of it is based on wrong thinking. Your very conceptions of the ideas are set up to distort your thinking on this subject.
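If it helps to make the exercise mechanical, here is a minimal sketch in Python of the inversion loop described above. The inversion prompts and the example beliefs are my own placeholders, not part of the exercise itself; substitute whatever is actually on your list.

```python
# A minimal sketch of the belief-inversion exercise described above.
# The INVERSIONS list and the example beliefs are placeholders;
# substitute the things that are actually most important to you.

INVERSIONS = [
    "The basic facts about how this works are wrong.",
    "The impossible is easy here; the probable is not actually true.",
    "The assumptions do not lead to the conclusions, and the assumptions are false anyway.",
    "You do not actually want the conclusions to be true.",
    "What works here works on principles other than the ones you thought.",
    "Your information about this is systematically corrupted, and your very concepts distort your thinking about it.",
]


def inversion_prompts(beliefs):
    """For each belief, yield each 'assume this is wrong' prompt to sit with."""
    for belief in beliefs:
        for inversion in INVERSIONS:
            yield f"Belief: {belief}\n  Assume: {inversion}"


if __name__ == "__main__":
    # Hypothetical example beliefs, not claims from this post.
    my_beliefs = [
        "Civilizational progress is roughly exponential.",
        "Rationality training makes me more effective.",
    ]
    for prompt in inversion_prompts(my_beliefs):
        print(prompt)
        print()
```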
What if my accepted ideas of civilizational progress are wrong? What if instead of exponential growth, you can basically just skip to the end? Moore’s Law is actually just complacency. You can, at any point, write down the most powerful and correct version of any part of civilization. You can also write down what needs to happen to get there. You can do this without actually performing any research and development in between, or even making prototypes. You don’t need an AGI to do this for you. Your brain and its contents right now are sufficient. You just need to organize them differently. In fact, you already know how to do this. You’re tripping over this ability repeatedly, overlooking the capability to solve everything you care about because you regard it as trash, some useless idea, or even a bad plan. You’ve buried it alongside the garbage of your mind. You’re not actually looking at what is in your head and how it can be used. Even if it feels like you are. Even if you’re already investing all your resources in ‘trying.’ It is possible, easy even. You’re just doing it wrong in an obvious way you refuse to recognize. Probably because you don’t actually want what you feel, think, and say you do. You already know why you’re lying to yourself about this.
You can’t build AGI without understanding what it’ll do first, so AI safety as a separate field is actually not even necessary or especially valuable. You can’t even get started with the tech that really matters until you’ve laid out what is going to happen in advance. That tech can also only be used for good ends. Also, AGI is impossible to build in the first place.
Rationality is bunk and contains more traps than valuable thinking techniques. MIRI is totally wrong about AI safety and is functionally incapable of coming anywhere close to what is necessary to align superintelligences. Even over a hundred years it will be mechanically unable to self-correct. CFAR is just very good at making you feel like rationality is being taught. They don’t understand even the basics of rationality in the first place. Instead, they’re just very good at convincing good people to give them money, and at convincing everyone, including themselves, that this is okay. Also, it is okay, because morality is actually about making you feel like good things are happening, not about actually making good things happen. We actually care about the symbol, not the substance.
That rationality cannot, even in its highest principles of telling you how to overcome itself, actually lead you to something better. To that higher unnamed thing which is obviously better once you’re there. There is, in fact, no rationality technique for making it easier to invent the next rationality, or for uncovering the principles it is missing. Even knowing that there are missing principles you must look for when your tools shatter is orthogonal to resolving the problem. It does not help you. Analogously, there is no science experiment for inventing rationality. You cannot build an ad-hoc house whose shape is engineering. If it somehow happens, it will be because of something other than the principles you thought you were using. You can keep running science experiments about rationality-like-things and eventually get an interesting output, but the reason it will work is something like brute-force random search.
That the singularity won’t happen. Exponential growth already ended. But we also won’t destroy ourselves for failing to stop X-risk. In fact, X-risk is a laughable idea. Humans will survive no matter what happens. It is impossible to actually extinguish the species. S-risk is also crazy; it is okay for uncountably many humans to suffer forever, because immense suffering is orthogonal to good/bad. What we actually value has nothing to do with humans and conscious experience at all, actually.