I did in fact have something between those two in mind, and was even ready to defend it, but then I basically remembered that LW is status-crazy and gave up on fighting that uphill battle. Kudos to alkjash for the fighting spirit.
SquirrelInHell
They explicitly said that he’s not wrong-on-many-things in the T framework, the same way Eliezer is T.correct.
Frustrating, that’s not what I said! Rule 10: be precise in your speech, Rule 10b: be precise in your reading and listening :P My wording was quite purposeful:
I don’t think you can safely say Peterson is “technically wrong” about anything
I think Raemon read my comments the way I intended them. I hoped to push on a frame that people seem to be (according to my private, unjustified, wanton opinion) obviously too stuck in. See also my reply below.
I’m sorry if my phrasing seemed conflict-y to you. I think the fact that Eliezer has high status in the community and Peterson has low status is making people stupid about this issue, and this makes me write in a style in which I sort of intentionally push on status, because that’s what I think is actually stopping people from thinking here.
Cool examples, thanks! Yeah, these are issues outside of his cognitive expertise and it’s quite clear that he’s getting them wrong.
Note that I never said that Peterson isn’t making mistakes (I’m quite careful with my wording!). I said that his truth-seeking power is in the same weight class, but obviously he has a different kind of power than LW-style. E.g. he’s less able to deal with cognitive bias.
But if you are doing “fact-checking” in LW style, you are mostly accusing him of getting things wrong about which he never cared in the first place.
Like when Eliezer uses phlogiston as an example in the Sequences and gets the historical facts wrong. But that doesn’t make Eliezer wrong in any meaningful sense, because the history isn’t what he was talking about.
There’s some basic courtesy in listening to someone’s message, not their words.
This story is trash and so am I.
If people don’t want to see this on LW I can delete it.
You are showcasing a certain unproductive mental pattern, for which there’s a simple cure. Repeat after me:
This is my mud pile
I show it with a smile
And this is my face
It also has its place
For increased effect, repeat 5 times in rap style.
[Please delete this thread if you think this is getting out of hand. Because it might :)]
I’m not really going to change my mind on the basis of just your own authority backing Peterson’s authority.
See, right here, you haven’t listened. What I’m saying is that there is some fairly objective quality, which I called “truth-seeking juice”, about people like Peterson, Eliezer and Scott, which you can evaluate by yourself. But you have just dug yourself into the same trap a little bit more. From what you write, your heuristics for evaluating sources seem to be a combination of authority and fact-checking isolated pieces (regardless of how much you understand the whole picture). Those are really bad heuristics!
The only reason why Eliezer and Scott seem trustworthy to you is that their big picture is similar to your default, so what they say is automatically parsed as true/sensible. They make tons of mistakes and might fairly be called “technically wrong on many things”. And yet you don’t care, because when you feel their big picture is right, those mistakes feel to you like not-really-mistakes.
Here’s an example of someone who doesn’t automatically get Eliezer’s big picture, and thinks very sensibly from their own perspective:
On a charitable interpretation of pop Bayesianism, its message is:
Everyone needs to understand basic probability theory!
That is a sentiment I agree with violently. I think most people could understand probability, and it should be taught in high school. It’s not really difficult, and it’s incredibly valuable. For instance, many public policy issues can’t properly be understood without probability theory.
Unfortunately, if this is the pop Bayesians’ agenda, they aren’t going at it right. They preach almost exclusively a formula called Bayes’ Rule. (The start of Julia Galef’s video features it in neon.) That is not a good way to teach probability.
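(For reference, since the quote turns on it: the formula in question is Bayes’ Rule, which in its standard form says P(H|E) = P(E|H) * P(H) / P(E), i.e. how strongly to believe hypothesis H after seeing evidence E, given the prior P(H) and the likelihood P(E|H).)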
How about you go read that, and try to mentally swap places. The degree to which Chapman doesn’t get Eliezer’s big picture is probably similar to the degree to which you don’t get Peterson’s big picture, with similar results.
[Note: somewhat taking you up on the Crocker’s rules]
Peterson’s truth-seeking and data-processing juice is in the super-heavy weight class, comparable to Eliezer etc. Please don’t make the mistake of lightly saying he’s “wrong on many things”.
At the level of analysis in your post and the linked Medium article, I don’t think you can safely say Peterson is “technically wrong” about anything; it’s overwhelmingly more likely you just didn’t understand what he means. [it’s possible to make more case-specific arguments here but I think the outside view meta-rationality should be enough...]
4) The skill to produce great math and the skill to produce great philosophy are secretly the same thing. Many people in either field do not have this skill and are not interested in the other field, but the people who shape the fields do.
FWIW I have reasonably strong but not-easily-transferable evidence for this, based on observation of how people manipulate abstract concepts in various disciplines. Using this lens, math, philosophy, theoretical computer science, theoretical physics, all meta disciplines, epistemic rationality, etc. form a cluster in which math is a central node, and philosophy is unusually close to math even considered in the context of the cluster.
Note that this is (by far) the least incentive-skewing from all (publicly advertised) funding channels that I know of.
Apply especially if all of 1), 2) and 3) hold:
1) you want to solve AI alignment
2) you think your cognition is pwned by Moloch
3) but you wish it wasn’t
tl;dr: your brain hallucinates sensory experiences that have no correspondence to reality. Noticing and articulating these “felt senses” gives you access to the deep wisdom of your soul.
I think this snark makes it clear that you lack gears in your model of how focusing works. There are actual muscles in your actual body that get tense as a result of stuff going on with your nervous system, and many people can feel that even if they don’t know exactly what they are feeling.
[Note that I am in no way an expert on strategy, probably not up to date with the discourse, and haven’t thought this through. I also don’t disagree with your conclusions much.]
[Also note that I have a mild feeling that you engage with a somewhat strawmanned version of the fast-takeoff line of reasoning, but I have trouble articulating why. I’m not satisfied with what I write below either.]
These possible arguments don’t seem to be included in your list. (I don’t necessarily think they are good arguments. Just mentioning whatever intuitively seems like it could come into play.)
Idiosyncrasy of recursion. There might be a qualitative difference between universality across economically-incentivized, human-like domains, and universality extended to self-improvement as carried out by the self-improving AI itself, rather than by human-like work on AI. In this case recursive self-improvement looks more like a side effect than like mainstream linear progress.
Actual secrecy. Some group might actually pull off being significantly ahead and protecting their information from leaking. There are incentives to do this.
Returns to non-scale (related). Some technologies might be easier to develop by a small or medium-sized well-coordinated group, rather than a global/national ecosystem. This means there’s a selection effect for groups which stay somewhat isolated from the broader economy until they are significantly ahead.
Non-technological cruxes. The ability to extract high-quality AI research from humans is upstream of technological development, and an early foom loop might route through a particular configuration of researcher brains and workflows. However, humans are not fungible, and this might produce strange, non-linear progress. This consideration seems historically more important for projects that really push the limits of human capability, and an AGI seems like such a project.
Nash equilibria. The broader economy might random-walk itself into a balance of AI technologies which actively hinders optimizing for universality, e.g. by producing only certain kinds of hardware. This means it’s not enough to argue that at some point researchers will realize the importance of AGI; you also have to argue that they will realize it before the technological/economic lock-in occurs.
I think it’s perfectly valid to informally say “gears” while meaning both “gears” (how clear a model is on what it predicts) and “meta-gears” (how clear the meta model is on which models it a priori expects to be correct). And the new clarity you bring here would probably be a good occasion to re-draw the boundaries around gears-ness, to make them match the structure of reality better. But this is just a suggestion.
[excellent, odds ratio 3:2 for “worth checking LW2.0 sometimes” and 4:3 for “LW2.0 will succeed”]
I think “Determinism and Reconstructability” are great concepts but you picked terrible names for them, and I’ll probably call them “gears” and “meta-gears” or something short like that.
This article made me realize that my cognition runs on something equivalent to logical inductors, and what I recently wrote on Be Well Tuned about cognitive strategies is a reasonable attempt at explaining how to implement logical inductors in a human brain.
Request: Has this idea already been explicitly stated elsewhere? Anything else regular old TAPs are missing?
It’s certainly not very new, but there’s nothing wrong with telling people about your TAP modifications. There are many nuances to using TAPs in practice, and ultimately everyone figures out their own style anyway. Whether you have noticed or not, you probably already have this meta-TAP:
“TAPs not working as I imagined → think how to improve TAPs”
It is, ultimately, the only TAP you need to successfully install to start the process of recursive improvement.
I have the suspicion that everyone is secretly a master at Inner Sim
There’s a crucial difference here between:
good “secretly”: I’m so good at it it’s my second nature, and there’s little reason to bring it up anymore
bad “secretly”: I’m not noticing what I’m doing, so I can’t optimize it, and never have
One example is that the top tiers of the community are in fact composed largely of people who directly care about doing good things for the world, and this (surprise!) comes together with being extremely good at telling who’s faking it. So in fact you won’t be socially respected above a certain level until you optimize hard for altruistic goals.
Another example is that whatever your goals are, in the long run you’ll do better if you first become smart, rich, knowledgeable about AI, sign up for cryonics, prevent the world from ending etc.
if people really wanted to optimize for social status in the rationality community there is one easiest canonical way to do this: get good at rationality.
I think this is false: even if your final goal is to optimize for social status in the community, real rationality would still force you to locally give it up because of convergent instrumental goals. There is in fact a significant first-order difference.
I realized today that UDT doesn’t really need the assumption that other players use UDT.
Was there ever such an assumption? I recall a formulation in which the possible “worlds” include everything that feeds into the decision algorithm, and it doesn’t matter if there are any games and/or other players inside of those worlds (their treatment is the same, as are corresponding reasons for using UDT).
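To make that framing concrete, here is a toy sketch in Python (purely illustrative and my own invention; the names udt_choose, worlds, prior and utility come from me, not from any canonical write-up): the agent just picks the policy with the best prior-weighted payoff across all worlds, and whatever games or other players exist are simply part of each world’s description.

# Toy sketch of the framing above: choose the policy (a mapping from
# observations to actions) that maximizes prior-weighted utility over
# all possible worlds, without ever updating the prior.
def udt_choose(policies, worlds, prior, utility):
    # policies: candidate observation -> action mappings
    # worlds:   world descriptions; each may contain games, other players,
    #           or copies of this very agent
    # prior:    dict mapping each world to its probability (never updated)
    # utility:  function (world, policy) -> payoff, obtained by evaluating
    #           the world with this policy plugged in wherever the agent appears
    return max(policies,
               key=lambda policy: sum(prior[w] * utility(w, policy) for w in worlds))

Nothing in this sketch requires the other players to run UDT; they just show up inside the worlds and inside the utility evaluation.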
You’d reap the benefits of being pubicly wrong
Bad typo.
By the way—did I mention that inventing the word “hammertime” was epic, and that now you might just as well retire because there’s no way to compete against your former glory.
I think this comment is 100% right despite being perhaps maybe somewhat way too modest. It’s more useful to think of sapience as introducing a delta on behavior, rather than a way to execute desired behavior. The second is a classic Straw Vulcan failure mode.
This is what the whole discussion is about. You are setting boundaries that are convenient for you, and refusing to think further. But some people in the reference class you are now denigrating as a whole are different from others. Some actually know their stuff and are not charlatans. Throwing a tantrum about it doesn’t change that.