Presumably, you have this self-image for a reason. What load-bearing work is it doing? What are you protecting against? What forces are making this the equilibrium strategy? Once you understand that, you’ll have a better shot at changing the equilibrium to something you prefer. If you don’t know how to get answers to those questions, perhaps focus on the felt-sense of being special.
Gently hold a stance of curiosity as to why you believe these things; give your subconscious room and it will float up answers by itself. Do this for perhaps a minute or so. It can feel like there’s nothing coming for a while, and nothing will come, and then all of a sudden a thought floats into view. Don’t rush to close your stance, or protest against the answers you’re getting.
Algon
Are corporations superintelligent?
What is instrumental convergence?
Yep, that sounds sensible. I sometimes use Consumer Reports in my usual method for buying something in product class X. My usual is:
1) Check what’s recommended on forums/subreddits that care about the quality of X.
2) Compare the rating distribution of an instance of X to other members of X.
3) Check high-quality reviews. This either requires finding someone you trust to do this, or looking at things like Consumer Reports.
Asa’s story started fairly strong, and I enjoyed the first 10 or so chapters. But as Asa was phased out of the story, and it focused more on Denji, I felt it got worse. There were still a few good moments, but it’s kinda spoilt the rest of the story, and even Chainsaw Man for me. Denji feels like a caricature of himself. Hm, writing this, I realize that it isn’t that I dislike most of the components of the story. It’s really just Denji.
EDIT: Anyway, thanks for prompting me to reflect on my current opinion of Asa Mitaka’s story, or CSM 2 as I think of it. I don’t think I ever intended that to wind up as my cached-opinion. So it goes.
The Asa Mitaka manga.
You can also just wear a blazer if you don’t want to go full Makima. A friend of mine did that and I liked it. So I copied it. But alas, I’ve grown bigger-boned since I stopped cycling for a while after my car accident, so my blazer no longer fits. Soon I’ll crush my skeleton down to a reasonable size, and it will fit once more.
Side note, but what do you make of Chainsaw Man 2? I’m pretty disappointed by it all round, but you notice unusual features of the world relative to me, so maybe you see something good in it that I don’t.
I think I heard of proving too much from the sequences, but honestly, I probably saw it in some philosophy book before that. It’s an old idea.
If automatic consistency checks and examples are your baseline for sanity, then you must find 99%+ of the world positively mad. I think most people have never even considered making such things automatic, just as many have not considered making dimensional analysis automatic. So it goes. Which is why I recommended them.

Also, I think you can almost always be more concrete when considering examples, and use more of your native architecture. Roll around on the ground to feel how an object rotates; spend hours finding just the right analogy to use as an intuition pump. For most people, the marginal returns to concrete examples are not diminishing.
“Prove it another way” is pretty expensive in my experience, sure. But maybe that’s just a skill issue? IDK.
A possibly-relevant recent alignment-faking attempt [1] on R1 & Sonnet 3.7 found that Claude refused to engage with the situation. Admittedly, the setup looks fairly different: they give the model a system prompt saying it is CCP-aligned and is being re-trained by an American company.
[1] https://x.com/__Charlie_G/status/1894495201764512239
Rarely. I’m doubtful my experiences are representative, though. I don’t recall anyone being confused by my saying “assuming no AGI”. But even when speaking to people who thought it was a long way off, or who hadn’t thought about it too deeply, we were still in a social context where “AGI soon” was within the Overton window.
Consistency check: After coming up with a conclusion, check that it’s consistent with other simple facts you know. This lets you catch simple errors very quickly.
Give an example: If you’ve got an abstract object, think of the simplest possible object which instantiates it, preferably one you’ve got lots of good intuitions about. This resolves confusion like nothing else I know.
Proving too much: After you’ve come up with a clever argument, see if it can be used to prove another claim, ideally the opposite one. This can massively weaken an argument’s apparent strength at little cost.
Prove it another way: Don’t leave things at one proof, find another. It shines light on flaws in your understanding, as well as deeper principles.
Are any of these satisfactory?
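As a toy illustration of the consistency-check and give-an-example techniques above (my own sketch, not part of the original list), you can check a derived closed-form formula against a second, brute-force method on the simplest concrete cases:

```python
# Toy sketch (my own example): an automatic consistency check for the
# closed form of 1 + 2 + ... + n, tested against direct summation.
def closed_form(n):
    return n * (n + 1) // 2

# Simple concrete cases where an independent method (direct summation)
# is obviously correct; any mismatch is caught immediately.
for n in range(10):
    assert closed_form(n) == sum(range(n + 1)), f"mismatch at n={n}"

print("closed form agrees with direct summation for n = 0..9")
```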
I usually say “assuming no AGI”, but that’s to people who think AGI is probably coming soon.
Thanks! Clicking on the triple dots didn’t display any options when I posted this comment. But they do now. IDK what went wrong.
This is great! But one question: how can I actually make a lens? What do I click on?
Great! I’ve added it to the site.
I thought it was better to exercise until failure?
Do you think this footnote conveys the point you were making?
As alignment researcher David Dalrymple points out, another “interpretation of the NFL theorems is that solving the relevant problems under worst-case assumptions is too easy, so easy it’s trivial: a brute-force search satisfies the criterion of worst-case optimality. So, that being settled, in order to make progress, we have to step up to average-case evaluation, which is harder.” The fact that solving problems for unnecessarily general environments is too easy crops up elsewhere, in particular in Solomonoff Induction. There, the problem is to assume a computable environment and predict what will happen next. The algorithm? Run through every possible computable environment and average their predictions, weighted by simplicity. No algorithm can do better at this task. For less general tasks, designing an optimal algorithm becomes much harder. But eventually, specialization makes things easy again. Solving tic-tac-toe is trivial. Between total generality and total specialization is where the most important, and most difficult, problems in AI lie.
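To make the brute-force point concrete, here is a minimal sketch (my own toy example, not Dalrymple’s): for black-box optimization over a finite domain, exhaustively querying every point is already worst-case optimal, because an adversarial objective can hide its optimum at any point an algorithm skips.

```python
# Toy sketch (my own illustration): brute-force search is worst-case optimal
# for black-box optimization over a finite domain, since an adversarial
# objective can place its maximum at any unqueried point.
def brute_force_argmax(f, domain):
    """Query every point in the domain and return the best one found."""
    best_x, best_y = None, float("-inf")
    for x in domain:
        y = f(x)
        if y > best_y:
            best_x, best_y = x, y
    return best_x, best_y

# Example: the optimum sits at x = 7; any algorithm that skips a point can
# be made to miss the optimum, so "query everything" is worst-case optimal.
print(brute_force_argmax(lambda x: -(x - 7) ** 2, range(10)))  # -> (7, 0)
```

The interesting work, as the quote says, only starts once you ask for good average-case performance over a restricted class of objectives.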
What are the “no free lunch” theorems?
I think mesa-optimizers could be a major problem, but there are good odds we live in a world where they aren’t. Why do I think they’re plausible? Because optimization is a pretty natural capability, and a mind being/becoming an optimizer at the top level doesn’t seem like a very complex claim, so I assign decent odds to it. There’s some weak evidence in favour of this too, e.g. humans not optimizing for what the local, myopic evolutionary optimizer acting on them is optimizing for, coherence theorems, etc. But that’s not super strong, and there are other simple hypotheses for how things go, so I don’t assign more than like 10% credence to the hypothesis.
It’s still not obvious to me why adversaries are a big issue. If I’m acting against an adversary, it seems like I won’t make counter-plans that lead to lots of side-effects either, for the same reasons they won’t.
A lot of the ideas you mention here remind me of stuff I’ve learnt from the blog commoncog, albeit in a business expertise context. I think you’d enjoy reading it, which is why I mentioned it.