I suppose I have two questions which naturally come to mind here:
Given Nate’s comment: “This change is in large part an enshrinement of the status quo. **Malo’s been doing a fine job running MIRI day-to-day for many many years (including feats like acquiring a rural residence for all staff who wanted to avoid cities during COVID, and getting that venue running smoothly).** **In recent years, morale has been low and I, at least, haven’t seen many hopeful paths before us.**” (Bold emphases are mine). Do you see the first bold sentence as being in conflict with the second, at all? If morale is low, why do you see that as an indicator that the status quo should remain in place?
Why do you see communications as being as decoupled from research as you currently do (whether your view is that it inherently is decoupled, or that it should be)?
1. Given Nate’s comment: “This change is in large part an enshrinement of the status quo. **Malo’s been doing a fine job running MIRI day-to-day for many many years (including feats like acquiring a rural residence for all staff who wanted to avoid cities during COVID, and getting that venue running smoothly).** **In recent years, morale has been low and I, at least, haven’t seen many hopeful paths before us.**” (Bold emphases are mine). Do you see the first bold sentence as being in conflict with the second, at all? If morale is low, why do you see that as an indicator that the status quo should remain in place?
A few things seem relevant here when it comes to morale:
I think, on average, folks at MIRI are pretty pessimistic about humanity’s chances of avoiding AI x-risk, and overall I think the situation has felt increasingly dire over the past few years to most of us.
Nate and Eliezer lost hope in the research directions they were most optimistic about, and haven’t found any new angles of attack in the research space that they have much hope in.
Nate and Eliezer very much wear their despair on their sleeves, so to speak, and I think it’s been rough for an org like MIRI to have that much sleeve-despair coming from both its chief executive and its founder.
During my time as COO over the last ~7 years, I’ve increasingly taken on more and more of the responsibility traditionally associated at most orgs with the senior leadership position. So when Nate says “This change is in large part an enshrinement of the status quo. Malo’s been doing a fine job **running MIRI day-to-day** for many many years […]” (emphasis mine), this is what he’s pointing at. However, he was definitely still the one in charge, and therefore had a significant impact on the org’s internal culture, narrative, etc.
While he has many strengths, I think I’m stronger in (and better suited to) some management and people leadership stuff. As such, I’m hopeful that in the senior leadership position (where I’ll be much more directly responsible for steering our culture etc.), I’ll be able to “rally the troops” so to speak in a way that Nate didn’t have as much success with, especially in these dire times.
Unfortunately, I do not have a long response prepared to answer this (and perhaps a long one would be somewhat inappropriate at this time); however, I wanted to express the following:
They wear their despair on their sleeves? I am admittedly somewhat surprised by this.
“Wearing your [feelings] on your sleeve” is an English idiom meaning openly showing your emotions.
It is quite distinct from the idea of belief as attire from Eliezer’s sequence post, in which he was suggesting that some people “wear” their (improper) beliefs to signal what team they are on.
Nate and Eliezer openly show their despair about humanity’s odds in the face of AI x-risk, not as a way of signaling what team they’re on, but because despair reflects their true beliefs.
I wonder about how much I want to keep pressing on this, but given that MIRI is refocusing towards comms strategy, I feel like you “can take it.”
The Sequences don’t make a strong case, that I’m aware of, that despair and hopelessness are helpful emotions that drive our motivation or our rational thought processes in the right direction, nor do they suggest that openly displaying such emotions is good for organizational quality. Please correct me if I’m wrong about that. (However, they… might. I’m currently working through how this position may have been influenced to some degree by the Sequences, though as a critical take.)
If despair needed to be expressed openly in order to actually make progress towards a goal, then we would call “bad morale” “good morale” and vice-versa.
I don’t think this is very controversial, so it makes sense to ask why MIRI thinks they have special, unusual insight into why this strategy works so much better than the default “good morale is better for organizations.”
I predict that ultimately the only response you could make—which you have already—is that despair is the most accurate reflection of the true state of affairs.
If we thought that emotionality was one-to-one with scientific facts, then perhaps.
Given that there actually currently exists a “Team Optimism,” so to speak, that directly appeared as an opposition party to what it perceives as a “Team Despair”, I don’t think we can dismiss the possibility of “beliefs as attire” quite yet.
I think this gets it backward. There were lots of optimistic people who kept not understanding or integrating the arguments that you should be less optimistic, and people kept kind of sliding off of really thinking about it, and finally some people were like “okay, time to just actually be extremely blunt/clear until they get it.”
(Seems plausible that then a polarity formed that made some people really react against the feeling of despair, but I think that was phase two)
2. Why do you see communications as being as decoupled from research as you currently do (whether your view is that it inherently is decoupled, or that it should be)?
The things we need to communicate about right now are nowhere near the research frontier.
One common question we get from reporters, for example, is “why can’t we just unplug a dangerous AI?” The answer to this is not particularly deep and does not require a researcher or even a research background to engage on.
We’ve developed a list of the couple dozen most common questions we’re asked by the press and the general public, and they’re mostly roughly on par with that one.
There is a separate issue of doing better at communicating about our research; MIRI has historically not done very well there. Part of it is that we were/are keeping our work secret on purpose, and part of it is that communicating is hard. To whatever extent it’s just about ‘communicating is hard,’ I would like to do better at the technical comms, but it is not my current highest priority.
Quickly chiming in to add that I can imagine there might be some research we could do that could be more instrumentally useful to comms/policy objectives. Unclear whether it makes sense for us to do anything like that, but it’s something I’m tracking.