Malo (Malo Bourgon), CEO at Machine Intelligence Research Institute (MIRI)
I’d certainly be interested in hearing about them, though it currently seems pretty unlikely to me that it would make sense for MIRI to pivot to working on such things directly, as opposed to encouraging others to do so (to the extent they agree with Nate/EY’s view here).
I think this is a great comment, and FWIW I agree with, or am at least sympathetic to, most of it.
If you are on an airplane or a train, and you can suddenly work or watch on a real theater screen, that would be a big game. Travel enough and it is well worth paying for that, or it could even enable more travel.
Ben Thompson agrees in a follow-up (paywalled):
Vision Pro on an Airplane
I tweeted about this, but I think it’s worth including in the Update as a follow-up to last week’s review of the Vision Pro: I used the Vision Pro on an airplane over the weekend, sitting in economy, and it was absolutely incredible. I called it “life-changing” on Twitter, and I don’t think I was being hyperbolic, at least for this specific scenario:
The movie watching experience was utterly immersive. When you go into the Apple TV+ or Disney+ theaters, with noise-canceling turned on, you really are transported to a different place entirely.
The Mac projection experience was an even bigger deal: my 16″ MacBook Pro is basically unusable in economy, and a 14″ requires being all scrunched up with bad posture to see anything. In this case, though, I could have the lid actually folded towards me (if, say, the person in front of me reclined), while still having a big 4K screen to work on. The Wi-Fi on this flight was particularly good, so I had a basketball game streaming to the side while I worked on the Mac; it was really extraordinary.
I mentioned the privacy of using a headset in my review, and that really came through clearly in this use case. It was really freeing to basically be “spread out” as far as my computing and entertainment went and to feel good about the fact I wasn’t bothering anyone else and that no one could see my screen.
There is no sign that anyone plans to actually offer MLB or other games in this mode.
MIRI 2024 Mission and Strategy Update
That may be right, but then the claim is wrong. The true claim would be “RSPs seem like a robustly good compromise with people who are more optimistic than me.”
IDK man, this seems like nitpicking to me ¯\_(ツ)_/¯. Though I do agree that, on my read, it’s technically more accurate.
My sense here is that Holden is speaking from a place where he considers himself to be among the folks (like you and me) who put significant probability on AI posing a catastrophic/existential risk in the next few years, and “people who have different views from mine” is referring to folks who aren’t in that set.
(Of course, I don’t actually know what Holden meant. This is just what seemed like the natural interpretation to me.)
And then the claim becomes not really relevant?
Why?
Responsible scaling policies (RSPs) seem like a robustly good compromise with people who have different views from mine
2. It seems like it’s empirically wrong given the strong pushback RSPs received, so at the very least you shouldn’t call the compromise “robustly” good, unless you mean a kind of modified version that would accommodate the most important parts of the pushback.
FWIW, my read here was that “people who have different views from mine” was in reference to these sets of people:
Some people think that the kinds of risks I’m worried about are far off, farfetched or ridiculous.
Some people think such risks might be real and soon, but that we’ll make enough progress on security, alignment, etc. to handle the risks—and indeed, that further scaling is an important enabler of this progress (e.g., a lot of alignment research will work better with more advanced systems).
Some people think the risks are real and soon, but might be relatively small, and that it’s therefore more important to focus on things like the U.S. staying ahead of other countries on AI progress.
Our reserves increased substantially in 2021 due to a couple of large crypto donations.
At the moment we’ve got ~$20M.
FWIW, I approached Gretta about starting to help out with comms-related stuff at MIRI, i.e., it wasn’t Eliezer’s idea.
Interesting, I don’t think I knew about this post until I clicked on the link in your comment.
Quickly chiming in to add that I can imagine there might be some research we could do that could be more instrumentally useful to comms/policy objectives. Unclear whether it makes sense for us to do anything like that, but it’s something I’m tracking.
Given Nate’s comment: “This change is in large part an enshrinement of the status quo. **Malo’s been doing a fine job running MIRI day-to-day for many many years** (including feats like acquiring a rural residence for all staff who wanted to avoid cities during COVID, and getting that venue running smoothly). In recent years, **morale has been low** and I, at least, haven’t seen many hopeful paths before us.” (Bold emphases are mine). Do you see the first bold sentence as being in conflict with the second, at all? If morale is low, why do you see that as an indicator that the status quo should remain in place?
A few things seem relevant here when it comes to morale:
I think, on average, folks at MIRI are pretty pessimistic about humanity’s chances of avoiding AI x-risk, and overall I think the situation has felt increasingly dire over the past few years to most of us.
Nate and Eliezer lost hope in the research directions they were most optimistic about, and haven’t found any new angles of attack in the research space that they have much hope in.
Nate and Eliezer very much wear their despair on their sleeves, so to speak, and I think it’s been rough for an org like MIRI to have that much sleeve-despair coming from both its chief executive and its founder.
During my time as COO over the last ~7 years, I’ve increasingly taken on more and more of the responsibilities that, at most orgs, are traditionally associated with the senior leadership position. So when Nate says “This change is in large part an enshrinement of the status quo. Malo’s been doing a fine job running MIRI day-to-day for many many years […]” (emphasis mine), this is what he’s pointing at. However, he was definitely still the one in charge, and therefore had a significant impact on the org’s internal culture, narrative, etc.
While he has many strengths, I think I’m stronger in (and better suited to) some management and people leadership stuff. As such, I’m hopeful that in the senior leadership position (where I’ll be much more directly responsible for steering our culture etc.), I’ll be able to “rally the troops” so to speak in a way that Nate didn’t have as much success with, especially in these dire times.
Does MIRI also plan to get involved in policy discussions (e.g. communicating directly with policymakers, and/or advocating for specific policies)?
We are limited in our ability to directly influence policy by our 501(c)(3) status; that said, we do have some latitude there, and we are exercising it within the limits of the law. See, for example, this tweet by Eliezer.
To expand on this a bit, I and a couple others at MIRI have been spending some time syncing up and strategizing with other people and orgs who are more directly focused on policy work themselves. We’ve also spent some time chatting with folks in government that we already know and have good relationships with. I expect we’ll continue to do a decent amount of this going forward.
It’s much less clear to me that it makes sense for us to end up directly engaging in policy discussions with policymakers as an important focus of ours (compared to focusing on broad public comms), given that this is pretty far outside of our area of expertise. It’s definitely something I’m interested in exploring though, and chatting about with folks who have expertise in the space.
Does MIRI need any help? (Or perhaps more precisely “Does MIRI need any help from the right kind of person with the right kind of skills, and if so, what would that person or those skills look like?”)
Yes, I expect to be hiring in the comms department relatively soon but have not actually posted any job listings yet. I will post to LessWrong about it when I do.
That said, I’d be excited for folks who think they might have useful background or skills to contribute, and who would be excited to work at MIRI, to reach out and let us know they exist, or to pitch us on why they might be a good addition to the team.
MIRI used to be focused on safety research, but now it’s mostly trying to stop the march towards superintelligence, by presenting the case for the extreme danger of the current trajectory.
Yeah, given the current state of the game board, we think that work in the comms/policy space seems more impactful to us on the margin, so we’ll be focusing on that as our top priority and seeing how things develop. That won’t be our only focus, though; we’ll definitely continue to host/fund research.
Sometimes quick org updates about team changes can be a little dry. ¯\_(ツ)_/¯
I expect you’ll find the next post more interesting :)
(Edit: fixed typo)
My read was that his comment was in response to this part at the end of the post:
There’s a lot more we hope to say about our new (and still evolving) strategy, and about our general thinking on the world’s (generally very dire) situation. But I don’t want those announcements to further delay sharing the above updates, so I’ve factored our 2023 strategy updates into multiple posts, beginning with this one.
Agree. I think Google DeepMind might actually be the most forthcoming about this kind of thing, e.g., see their Evaluating Frontier Models for Dangerous Capabilities report.