If you know the person you’re talking to well, have a shared understanding of what is meant by the phrase, and have a strong sense that you are working together and it won’t be taken negatively, then I can see this working out. But I think there are a lot of situations that lack this shared context and that are more adversarial. It seems to me like you can often get across similar info in a more specific way. For example, if you think someone is rationalizing, you can focus on the underlying issue and hope to “shake them out of it” by walking through the logic of the issue, or you can identify a more specific “meta-issue” if you want to go to the meta-level. That would depend on exactly how they are “rationalizing”, although again, if you have a strong common understanding of what “truth-seeking” means, perhaps that is the best way to describe the meta-issue in your case.
Ex-OpenAI employee amici leave to file denied in Musk v OpenAI case?
I’d go with “I don’t think this conversation is helping us ________”
I think this can be fine, especially in a context where you know the person well and you are explicitly working together on something (e.g. a coworker), but I think this often won’t work in contexts where you don’t know the person well or where things are more adversarial.
If you’re having a debate that’s mostly for an external audience, then maybe you should just call out that the liar is lying.
Sometimes you should, but I think it’s totally possible you shouldn’t (and I think you should have a very high bar for doing so).
If you’re trying to work with them, then it’s probably better to try to figure out what aspect of the conversation is motivating them to lie and trying to incentivise telling the truth instead. If you can’t do that then it doesn’t really matter what’s going on in their head; you’re not going to have a productive conversation anyway.
This will obviously depend on the situation, but I think it’s totally possible to have a productive conversation even when someone is lying, you don’t call them out, and you can’t motivate them to tell the truth. It just depends on what counts as “productive” from your perspective. That should depend primarily on your goals for the conversation, not some cosmic principle about “truth-seeking”. If I’m trying to buy a car and the salesperson lies to me about how the one I’m looking at is surely going to be sold today, I can just keep that to myself and use my knowledge to my own advantage; I don’t have to try to make the salesperson more honest.
same way as if you said “your methodology here is flawed in X, Y, Z ways” regardless of agreement with conclusion?
This raises an interesting issue. A meta-argument is an argument about an argument. Arguments about methodology are indeed “meta”, but they are “meta” in a different way than “you are insufficiently truth-seeking” is. You can argue about methodology because you think certain methodologies are more reliable, and thus there is still a strong connection to the object level. I would therefore be tempted to call this a meta-object-argument: a meta-argument whose purpose is to help address the object level.
“You are insufficiently truth-seeking” can be meta in the sense that it raises a higher-order issue: that the person in the discussion is doing something bad. We might call it a meta-meta-argument: a meta-argument whose purpose is to address a meta-level issue.
Aren’t most statements like this wanting to be on the meta level?
I think in many cases it’s genuinely unclear. For reasons of humor my post kind of implies a fairly adversarial framing, where it often is going to be the case that people are intending to “go meta”, but I think you can invoke the idea of truth-seeking in a much softer way as well, where it’s not necessarily intended to be “meta” but often will end up getting you there anyway.
Even when you are fully intending to “go meta”, I believe my advice still applies.
What about when they say, “you’re strawmanning me!”
Depends: were you strawmanning them? If so, say “you’re right, I was strawmanning”; if not, say “no I’m not”.
More seriously, it depends on what direction you want to take the discussion. I think it comes down to the same bifurcation I identify above (going vs. not going meta). My “hot take” is that often you don’t actually want to emphasize the meta-issues you have with the other person, even if you think they’re doing bad stuff.
So, in your example, I think you can break it down like this:
1. Is the most important thing to you to highlight what you believe are the argumentative “moves” they are making? If so, do it, but make sure you really have a solid argument. Evaluate the strength of your argument as best you can based on what an objective, outside observer would think is strong, not just on how sure you are that they are doing what you think they are doing.
2. If you aren’t going to fully “go meta” like in #1, which position do you want to try to get them to commit to?
   - If the new one, just accept their change, ask clarifying questions to lock it in, and go from there.
   - If the old one, it depends how they have framed it. You might have to slightly “go meta” by saying something like “I feel like you’ve shifted your position a bit”, or something of that nature.
Don’t accuse your interlocutor of being insufficiently truth-seeking
In the context of AI safety views that are less correlated/more independent, I would personally bump the GDM work related to causality. I think GDM is the only major AI-related organization I can think of that seems to have a critical mass of interest in this line of research. It’s a bit different since it’s not a full-on framework for addressing AGI, but I think it is a distinct (and in my view under-appreciated) line of work that has a different perspective and draws on different concepts/fields than a lot of other approaches.
I think I am, all things considered, sad about this. I think libel suits are really bad tools for limiting speech, and I declined being involved with them when some of the plaintiffs offered me to be involved on behalf of LW and Lightcone.
Appreciate you saying this. It raises my esteem for LW/Lightcone to hear that this is the route you all chose. Perhaps that doesn’t mean much since I largely agree with the view you express about defamation suits, but even for those who disagree, I think there is something to admire here in terms of sticking to principles even when it’s people you strongly disagree with who are benefiting from those principles in a particular case.
I know I’ve responded to a lot of your comments, and I get the sense you don’t want to keep engaging with me, so I’ll try to keep it brief.
We both agree that details matter, and I think the details of what the actual problem is matter. If, at bottom, the thing that Epoch/these individuals have done wrong is recklessly accelerate AI, I think you should have just said that up top. Why all the “burn the commons”, “sharing information freely”, “damaging to trust” stuff? It seems like you’re saying at the end of the day, those things aren’t really the thing you have a problem with. On the other hand, I think invoking that stuff is leading you to consider approaches that won’t necessarily help with avoiding reckless acceleration, as I hope my OpenAI example demonstrates.
I think it’s not all that uncommon for people who are highly competent in their current role to be passed over for promotion to leadership. LeBron James isn’t guaranteed the job of NBA commissioner just because he balls hard. Things like “avoid[ing] negative-EV projects” would be prime candidates for something like this. If you’re amazing at executing technical work on your assigned projects but aren’t as good at prioritizing projects or coming up with good ideas for projects, then I could definitely see that blocking a move to leadership even if you’re considered insanely competent technically.
I largely agree with the underlying point here, but I don’t think it’s quite correct that something like this only applies in specific professions. For example, I think every major company is going to expect employees to be careful about revealing internal info, and there are norms that apply more broadly (trade secrets, insider trading, etc.).
As far as I can tell, though, those are all highly dissimilar to this scenario because they involve an existing widespread expectation of not using information in a certain way. It’s not even clear to me in this case what information was allegedly used in what bad way.
I just think it’s really bad if people feel that they can’t speak relatively freely with the forecasting organisations because they’ll misuse the information.
To “misuse”, to me, implies taking a bad action. Can you explain what misuse occurred here? If we assume that people at OpenAI now feel less able to speak freely after things that ex-OpenAI employees have said/done, would you likewise characterize those people as having “misused” information or experience they gained at OpenAI? I understand you don’t have fully formed solutions, and that’s completely understandable, but I think my questions go to a much more fundamental issue about what the underlying problem actually is. I agree it is worth discussing, but I think it would clarify the discussion to understand what the intent of such a norm would be (and whether achieving that intent would in fact be desirable).
(This is distinct from my separate point about it being a mistake to hire folk who do things like this. It is a mistake to have hired folks who act strongly against your interests even if they don’t break any ethical injunctions)
If Coca-Cola hires someone who later leaves and goes to work for Pepsi because Pepsi offered them higher compensation, I’m not sure it would make sense for Coca-Cola to conclude that they should make big changes to their hiring process, other than perhaps increasing their own compensation if they determine that is a systematic issue. Coca-Cola probably needs to accept that “it’s not personal” is sometimes going to be the nature of the situation. Obviously details matter, so maybe this case is different, but I think working in an environment where you need to cooperate with other people/institutions means you also have to sometimes accept that people you work with will make decisions based on their own judgements and interests, and therefore may do things you don’t necessarily agree with.
(You could say “disempowerment which is gradual” for clarity.)
I feel like there is a risk of this leading to a never-ending sequence of meta-communication concerns. For instance, what if a reader interprets “gradual” to mean taking more than 10 years, but the writer thought 5 would be sufficient for “gradual” (and see timelines discussions around stuff like continuity for how this keeps going)? Or what if the reader assumes “disempowerment” means complete disempowerment, but the writer only meant some unspecified “significant amount” of disempowerment? It’s definitely worthwhile to try to be clear initially, but I think we also have to accept that clarification may need to happen “on the backend” sometimes. This seems like a case where one could simply clarify that they have a different understanding compared to the paper. In fact, it’s not all that clear to me that people won’t implicitly translate “disempowerment which is gradual” to “gradual disempowerment”. It could be that the paper stands in just as much for the concept as for the literal words in people’s minds.
But this only works if those less worried about AI risks who join such a collaboration don’t use the knowledge they gain to cash in on the AI boom in an acceleratory way.
Can you state more specifically what the alleged bad actions are here? Based on some of the discussions under your post about professional norms surrounding information disclosure, I think it is worth distinguishing two cases.
First, consider a norm that limits the disclosure of some relatively specific and circumscribed pieces of information, such as a doctor not being allowed to reveal personal health information of patients outside of what is needed to provide care.
Second, consider a general norm that if you cooperate with someone and they provide you some info, you won’t use that info contrary to their interests. It’s not 100% clear to me, but your post sounds a lot like this second one.
I think the second scenario raises a lot of issues. It seems challenging to enforce, hard to understand and navigate, costly for people to attempt to conform to, and potentially counterproductive for what seems to be your goal. You are considering a specific case at a specific point in time, but I don’t think that gives the full picture of the impact of such a norm. For example, consider ex-OpenAI employees who left due to concerns about AI safety. Should the expectation be that they only use information and experience they gained at OpenAI in a way that OpenAI would approve of?
Now, if Epoch and/or specific individuals made commitments that they violated, that might be more like the first case, but it’s not clear that is what happened here. If it is, more explanation of how this is the case would be helpful, I think.
11 former OpenAI employees filed an amicus brief in the Musk vs. Altman lawsuit
If I’m reading the docket correctly, First Amendment expert Eugene Volokh has entered an appearance on behalf of the ex-OpenAI amici. I don’t want to read too much into that, but it is interesting to me, in light of the information about OpenAI employees and NDAs, that a First Amendment expert is working with them.
A group of former OpenAI employees filed a proposed amicus brief in support of Musk’s lawsuit on the future of OpenAI’s for-profit transition. Meanwhile, OpenAI countersued Elon Musk.
I think this is the first time that the charter has been significantly highlighted in this case. My own personal view is that the charter is one of the worst documents for OpenAI (and therefore good for Musk), and having their own employees state that it was emphasized a lot and treated as binding is very bad for OpenAI and the associated defendants. The timeline for all this stuff isn’t 100% clear to me, so I can imagine there being issues with whether the charter was timed such that it is relevant for Musk’s own reliance, but the vibes of this for OpenAI are horrendous. It also raises the interesting question of whether the “merge-and-assist” part of the charter might be enforceable.
The docket seems to indicate that Eugene Volokh is representing the ex-OpenAI amici (in addition to Lawrence Lessig). To my understanding, Volokh is a First Amendment expert and has also done work on transparency in courts. The motion for leave to file also indicates that OpenAI isn’t necessarily on board with the brief being filed. I wonder if they are possibly going to argue that their ex-employees shouldn’t be allowed to do what they’re doing (perhaps trying to enforce NDAs?), and whether Volokh is planning to weigh in on that issue.
Strongly agree that causality is a useful and underrated perspective in AI discussions. I’ve found the literature on invariant causal prediction helpful for understanding distributional generalization, especially some of the results in this paper. One of the papers you reference above puts the core issue well:
generalization guarantees must build on knowledge or assumptions on the “relatedness” of different training and testing domains
Based on the way I sometimes see the generalization of LLMs or other models being discussed, it’s not clear to me that this idea has really been internalized by the more ML/engineering-focused segments of the AI community.
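To make the invariance idea concrete, here is a minimal toy sketch of the invariant-causal-prediction recipe under simplifying assumptions (linear models, simple residual-based two-sample tests, exhaustive search over predictor subsets). The function names and test choices are my own for illustration, not any library’s reference implementation:

```python
# Toy sketch of the invariant-causal-prediction idea, written for illustration.
# For each candidate predictor subset S we ask: after regressing y on X[:, S]
# with all environments pooled, do the residuals look the same in every
# environment? Subsets that pass are "invariant"; their intersection is a
# conservative guess at the direct causes of y.
from itertools import combinations

import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression


def residuals_invariant(X, y, env, subset, alpha=0.05):
    """Pooled linear fit on the subset, then per-environment residual tests."""
    if subset:
        model = LinearRegression().fit(X[:, subset], y)
        resid = y - model.predict(X[:, subset])
    else:
        resid = y - y.mean()  # empty subset: intercept-only model
    for e in np.unique(env):
        inside, outside = resid[env == e], resid[env != e]
        # Reject invariance if either the residual mean (Welch t-test) or
        # the residual spread (Levene test) differs for this environment.
        if stats.ttest_ind(inside, outside, equal_var=False).pvalue < alpha:
            return False
        if stats.levene(inside, outside).pvalue < alpha:
            return False
    return True


def invariant_predictors(X, y, env, alpha=0.05):
    """Intersect every predictor subset whose residuals pass the invariance tests."""
    d = X.shape[1]
    accepted = [set(s)
                for k in range(d + 1)
                for s in combinations(range(d), k)
                if residuals_invariant(X, y, env, list(s), alpha)]
    return set.intersection(*accepted) if accepted else set()
```

If each environment corresponds to a different intervention regime, the intersection returned above is, under the original ICP assumptions (a shared linear mechanism for y across environments), a conservative estimate of y’s direct causal parents. That is one concrete way the quoted point gets cashed out: the generalization guarantee only exists because of an explicit assumption about how the training environments are related, namely that the mechanism for y is invariant across them.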
As for lesswrong specifically, I think the prevalence of beliefs about LDT/FDT/Newcomb-like problems impacts the way causality is sometimes discussed in an interesting way. I think there would be great benefit, in both wider AI circles and lesswrong in particular, from increased discussion of causality.
I think it’s worth thinking about what you are trying to achieve in any given discussion. Why do you need the person to acknowledge what you believe is their true interest? I do think people often describe their interests as being about finding or demonstrating “the truth” for a lot of topics, but to me there is a large possibility that going down that road mostly gets into semantics.
I’m not entirely sure I understand your point regarding racism/sexism, but I can imagine something like this. Someone has a belief that is considered racist. When confronted about why they talk about this belief so much, they say it’s because they think it’s true and it’s super important to stand up for the truth. My view is that the person often does genuinely believe the thing is true, but their degree of focus does come partially from what you say: the desire to have their belief not considered racist. Which one is the “real” reason? I think it’s kind of hopelessly entangled. They do believe the thing is true, but it’s not like they are going around being really concerned about telling people the sky is blue, even though they also believe that is true. Often, if you are in a discussion related to this, you can focus on whether the belief in question is or is not true, whether it is or is not racist, that kind of thing. I think you will often get more productive discussions this way than by going into what the individual person’s “real reasons” are, unless you have an interest in them specifically (like they are your friend and you really want to convince them of something about the topic). Even in that case, it’s not clear you can really “untangle” the reasons, but you might want to go more into their psychology if you care about them specifically.