Relevance of prior theoretical ML work to alignment; research on obfuscation in theoretical cryptography as it relates to interpretability; theory underlying various phenomena such as grokking. Disclaimer: this list is very partial and just thrown together.
Hm, yeah that seems like a relevant and important distinction.
I think I was envisioning profoundness, as humans can observe it, to be primarily an aesthetic property, so I’m not sure I buy the concept of “actual” profoundness, though I don’t have a confident opinion about this.
I think that, on the margin, new alignment researchers should be more willing to work on less-deep-seeming ideas than they currently seem to me to be.
Working on a wide variety of deep ideas does sound better to me than working on a narrow set of them.
If something seems deep, it touches on stuff that’s important and general, which we would expect to be important for alignment.
The specific scenario I talk about in the paragraph you’re responding to is one where everything except the sense of deepness is the same for both ideas, such that someone who doesn’t have a sense of which ideas are deep or profound would find the ideas basically equivalent. In such a scenario my argument is that we should expect the deep idea to receive more attention, despite there not existing legible or well-grounded reasons for this. Some amount of preference for the deep idea might be justifiable on the grounds of trusting intuitive insight, but I don’t think the track record of intuitive insight as to which ideas are good is actually very impressive—there are a huge number of ideas that sounded deep but didn’t work out (see some philosophy, psychoanalysis, etc.) and very few that did work out[1].
try to recover from the sense of deepness some pointers at what seemed deep in the research projects
I think on the margin new theoretical alignment researchers should do less of this, as I think most deep-sounding ideas just genuinely aren’t very productive to research and aren’t amenable to being proven unproductive to work on—oftentimes the only evidence that a deep idea isn’t productive to work on is that nothing concrete has come of it yet.
[1] I don’t have empirical analysis showing this—I would probably gesture to various prior alignment research projects to support this if I had to, though I worry that would devolve into arguing about what ‘success’ meant.
Against Deep Ideas
I think I agree with this in many cases, but am skeptical of such a norm when the requests are related to criticism of the post or arguments as to why a claim it makes is wrong. I think I agree that the specific request not to respond shouldn’t ideally make someone more likely to respond to the rest of the post, but I think that neither should it make someone less likely to respond.
I’ve tried this for a couple of examples and it performed just as well. Additionally it didn’t seem to be suggesting real examples when I asked it what specific prompts and completion examples Gary Marcus had made.
I also think that people who have followed the evolution of GPT should have a prior that these examples will soon no longer break GPT, as has happened with prior examples. While it’s possible this time will be different, I think automatic strong skepticism without evidence is rather unwarranted.
Addendum: I’m also skeptical of the idea that OpenAI put much effort into fixing Gary Marcus’s specific criticisms, as I suspect his criticisms do not seem particularly important to them, but proving this sounds difficult.
I think there are a number of ways in which talking might be good given that one is right about there being obstacles—one that appeals to me in particular is the increased tractability of misuse arising from the relevant obstacles.
[Edit: *relevant obstacles I have in mind. (I’m trying to be vague here)]
Forget about what the social consensus is. If you have technical understanding of current AIs, do you truly believe there are any major obstacles left? The kind of problems that AGI companies could reliably not tear down with their resources? If you do, state so in the comments, but please do not state what those obstacles are.
I think this request, absent a really strong compelling argument that is spelled out, creates an unhealthy epistemic environment. It is possible that you think this is false or that it’s worth the cost, but you don’t really argue for either in this post. You encourage people to question others and not trust blindly in other parts of the post, but this portion expects people not to elaborate on their opinions without an explanation as to why. You repeat this again by saying “So our message is: things are worse than what is described in the post!” without justifying yourselves or, imo, properly conveying the level of caution with which people should treat such an unsubstantiated claim.
I’m tempted to write a post replying with why I think there are obstacles to AGI, what broadly they are (with a few examples), and why it’s important to discuss them. (I’m not going to do so atm because it’s late and I know better than to publicly share something that people have implied to me is infohazardous without carefully thinking it over (and discussing doing so with friends as well).)
(I’m also happy to post it as a comment here instead but assume you would prefer not and this is your post to moderate.)
Okay, a few things:
They’re more likely to be right than I am, or we’re “equally right” or something
I don’t think this so much as I think that a new person to lesswrong shouldn’t assume you are more likely to be right than they are, without evidence.
The norms can be evaluated extremely easily on their own; they’re not “claims” in the sense that they need rigorous evidence to back them up. You can just … look, and see that these are, on the whole, some very basic, very simple, very straightforward, and pretty self-evidently useful guidelines.
Strongly disagree. They don’t seem easy to evaluate to me, they don’t seem straightforward, and most of all they don’t seem self-evidently useful. (I admit, someone telling me something I don’t understand is self-evident is a pet peeve of mine).
I suppose one could be like “has Duncan REALLY proven that Julia Galef et al speak this way?” but I note that in over 150 comments (including a good amount of disagreement) basically nobody has raised that hypothesis. In addition to the overall popularity of the list, nobody’s been like, “nuh-uh, those people aren’t good communicators!” or “nuh-uh, those good communicators’ speech is not well-modeled by this!”
I personally have had negative experiences communicating with someone on this list. I don’t particularly think I’m comfortable hashing it out in public, though you can DM me if you’re that curious. Ultimately I don’t think it matters—however many impressive, great communicators are on that list, I don’t feel willing to take their word (or, well, your word about their words) that these norms are good unless I’m actually convinced myself.
Edit to add: I’d be good with standards, I just am not a fan of this particular way of pushing-for/implementing them.
So far as I can tell, the actual claim you’re making in the post is a pretty strong one, and I agree that if you believe that, you shouldn’t represent your opinion as weaker than it is. However, I don’t think the post provides much evidence to support the rather strong claim it makes. You say that the guidelines are:
much closer to being something like an objectively correct description of How To Do It Right than they are to a mere random user’s personal opinion
and I think this might be true, but it would be a mistake for a random user, possibly new to this site, to accept your description over their own based on the evidence you provide. I worry that some will regardless, given the ~declarative way your post seems to be framed.
I feel uncomfortable with this post’s framing. It feels like someone went into a garden I spend my time in and unilaterally put up a sign with a list of guidelines people should follow in the garden, with no ability to enforce these. I know that I can choose on my own whether or not to follow these guidelines, based on whether I think they are good ideas, but newcomers to the garden will see the sign and assume they have to follow them. I would have vastly preferred that the sign instead say “I personally think these norms would be neat, here’s why.”
(to clarify: the garden = lesswrong/the rationalist community. the sign = this post)
I think that if humans with AI advisors are approximately as competent as pure AI in terms of raw capabilities, I would expect humans with AI advisors to outcompete the pure AI in practice, given that the humans appear more aligned and less likely to be dangerous than pure AI—a significant competitive advantage in a lot of power-seeking scenarios where gaining the trust of other agents is important.
Could you clarify what egregores you meant when you said:
The egregores that are dominating mainstream culture and the global world situation
Is it fair to say that organizations, movements, polities, and communities are all egregores?
What exactly is an egregore?
I think point (2) of this argument either means something weaker than it needs to for the rest of the argument to go through, or is just straightforwardly wrong.
If OpenAI released a weakly general (but non-singularity-inducing) GPT5 tomorrow, it would pretty quickly have significant effects on people’s everyday lives. Programmers would vaguely describe a new feature and the AI would implement it, AIs would polish any writing I do, and I would stop using Google to research things and instead just chat with the AI and have it explain such-and-such paper I need for my work. In their spare time people would read custom books (or watch custom movies) tailored to their extremely niche interests. This would have a significant impact on the everyday lives of people within a month.
It seems conceivable that somehow the “socio-economic benefits” wouldn’t be as significant that quickly—I don’t really know what “socio-economic benefits” are exactly.
However, the rest of your post seems to treat point (2) as proving that there would be no upside from a more powerful AI being released sooner. This feels like a case of a fancy, clever theory obscuring an obvious reality: better AI would impact a lot of people very quickly.