I’m not going to argue that you should pay attention to EY. His arguments convince me, but if they don’t convince you, I’m not gonna do any better.
What I’m trying to get at is, when you ask “is there any evidence that will result in EY ceasing to urgently ask for your money?”… I mean, I’m sure there is such evidence, but I don’t wish to speak for him. But it feels to me that by asking that question, you possibly also think of EY as the sort of person who says: “this is evidence that AI risk is near! And this is evidence that AI risk is near! Everything is evidence that AI risk is near!” And I’m pointing out that no, that’s not how he acts.
While we’re at it, this exchange between us seems relevant. (“Eliezer has said that security mindset is similar, but not identical, to the mindset needed for AI design.” “Well, what a relief!”) You seem surprised, and I’m not sure what about it was surprising to you, but I don’t think you should have been surprised.
Basically, even if you’re right that he’s wrong, I feel like you’re wrong about how he’s wrong. You seem to have a model of him which is very different from my model of him.
(Btw, his opinion seems to be that it’s AlphaGo’s methods, not its results, that make it more of a leap than a self-driving car or Deep Blue. Not sure that affects your position.)
“this is evidence that AI risk is near! And this is evidence that AI risk is near! Everything is evidence that AI risk is near!” And I’m pointing out that no, that’s not how he acts.
In particular, he apparently mentioned Go play as an indicator before (and, like many other people, assumed it was somewhat more distant) and is now following up on that threshold. What else would you expect? That he doesn’t name a limited number of relevant events (I assume the number is limited; I didn’t know of this specific one before)?
I think you misunderstood me (but that’s my fault for being opaque; cadence is hard to convey in text). I was being sarcastic. In other words, I don’t need EY’s opinion; I can just look at the problem myself (as you guys say, “argument screens authority”).
I feel like you’re wrong about how he’s wrong.
Look, I met EY and chatted with him. I don’t think EY is “evil,” exactly, in the way that L. Ron Hubbard was. I think he mostly believes his line (but humans are great at self-deception). I think he’s a flawed person, like everyone else. It’s just that he has an enormous influence on the rationalist community, which immensely magnifies the damage his normal human flaws and biases can do.
I always said that the way to repair human frailty issues is to treat rationality as a job (rather than a social club), and fellow rationalists as coworkers (rather than tribe members). I also think MIRI should stop hitting people up for money and get a normal funding stream going. You know, let their ideas of how to avoid UFAI compete in the normal marketplace of ideas.
I also think MIRI should stop hitting people up for money and get a normal funding stream going. You know, let their ideas of how to avoid UFAI compete in the normal marketplace of ideas.
Currently MIRI gets its funding from 1) donations and 2) grants. Isn’t that exactly what the normal funding stream for non-profits is?
Sure. Scientology probably has non-profits, too. I am not saying MIRI is anything like Scientology, merely that it isn’t enough to just determine legal status and call it a day, we have to look at the type of thing the non-profit is.
MIRI is a research group. They call themselves an institute, but they aren’t, really. Institutes are large. They are working on some neat theory stuff (from what Benja/EY explained to me) somewhat outside the mainstream. Which is great! They have some grant funding, actually, last I checked. Which is also great!
They are probably not yet financially secure enough to stop asking for money, which is also ok.
I think all I am saying is that, in my view, the success condition is that they “achieve orbit” and stop asking, because what they are working on is considered sufficiently useful research that they can operate like a regular research group. If they never stop asking, I think that’s a bit weird, because either their direction isn’t perceived as good and they can’t get enough funding bandwidth without donations, or they do have enough bandwidth but want more revenue anyway, which I personally would find super weird and unsavory.
They are probably not yet financially secure enough to stop asking for money, which is also ok.
Who is? Last I checked, Harvard was still asking alums for donations, which suggests to me that asking is driven by getting money more than it’s driven by needing money.
I think comparing Harvard to a research group is a type error, though. Research groups don’t typically do this. I am not going to defend Unis shaking alums down for money, especially given what they do with it.
I think comparing Harvard to a research group is a type error, though.
I know several research groups where the PI’s sole role is fundraising, despite them having much more funding than the average research group.
My point was more generic—it’s not obvious to me why you would expect groups to think “okay, we have enough resources, let’s stop trying to acquire more” instead of “okay, we have enough resources to take our ambitions to the next stage.” The American Cancer Society has about a billion dollar budget, and yet they aren’t saying “yeah, this is enough to deal with cancer, we don’t need your money.”
(It may be the case that a particular professor stops writing grant applications, because they’re limited by attention they can give to their graduate students. But it’s not like any of those professors will say “yeah, my field is big enough, we don’t need any more professor slots for my students to take.”)
In my experience, research groups exist inside universities or a few corporations like Google. The senior members are employed and paid for by the institution, and only the postgrads, postdocs, and equipment beyond basic infrastructure are funded by research grants. None of them fly “in orbit” by themselves but only as part of a larger entity. Where should an independent research group like MIRI seek permanent funding?
By “in orbit” I mean “funded by grants rather than charity.” If a group has a steady stream of research grants, that means they are doing good enough work that funding agencies continue to give them money. This is the standard way for a research group to be self-sustaining.