Eliezer has expressed that ultimately, the goal of MIRI is not just to research how to make FAI, but to be the ones to make it.
In many ways it’s a race. While the public is squabbling, someone is going to build the first recursively self-improving system. We’re trying to maneuver the situation so that the people who do it first are the people who know what they’re doing.
Eliezer has expressed that ultimately, the goal of MIRI is not just to research how to make FAI, but to be the ones to make it.
Hmm... I wasn’t aware of that. Is there any source for that statement? Is MIRI actually doing any general AI research? I don’t think you can easily jump from one specific field of AI research (ethics) to general AI research & design.
From here.

9. Is your pursuit of a theory of FAI similar to, say, Hutter’s AIXI, which is intractable in practice but offers an interesting intuition pump for the implementers of AGI systems? Or do you intend on arriving at the actual blueprints for constructing such systems? I’m still not 100% certain of your goals at SIAI.
Definitely actual blueprint, but, on the way to an actual blueprint, you probably have to, as an intermediate step, construct intractable theories that tell you what you’re trying to do, and enable you to understand what’s going on when you’re trying to do something. If you want a precise, practical AI, you don’t get there by starting with an imprecise, impractical AI and going to a precise, practical AI. You start with a precise, impractical AI and go to a precise, practical AI. I probably should write that down somewhere else because it’s extremely important; various people will try to dispute it, and at the same time it hopefully ought to be fairly obvious if you’re not motivated to arrive at a particular answer there. You don’t just run out and construct something imprecise because, yeah, sure, you’ll get some experimental observations out of that, but what are your experimental observations telling you? One might say something along the lines of ‘well, I won’t know that until I see it,’ and I suppose that has been known to happen a certain number of times in history; just inventing the math has also happened a certain number of times in history.
We already have a very large body of experimental observations of various forms of imprecise AIs, both the domain-specific types we have now and the sort of imprecise AI constituted by human beings, and eyeballing that data... well, I’m not going to say it doesn’t help, but on the other hand, we already have this data, and now there is this sort of math step in which we understand what exactly is going on, and then the further step of translating the math back into reality. It is the goal of the Singularity Institute to build a Friendly AI. That’s how the world gets saved; someone has to do it. A lot of people tend to think that this is going to require, like, a country’s worth of computing power or something like that, but that’s because the problem seems very difficult when they don’t understand it, so they imagine throwing something at it that seems very large and powerful and gives this big impression of force, which might be a country-sized computing grid, or it might be a Manhattan Project where some computer scientists... but size matters not, as Yoda says.
What matters is understanding, and if the understanding is widespread enough, then someone is going to grab the understanding and use it to throw together the much simpler AI that does destroy the world, the one that’s built to much lower standards. So the model is: yes, you need the understanding; the understanding has to be concentrated within a group of people small enough that there is not one defector in the group who goes off and destroys the world; and then those people have to build an AI. If you condition on the world having been saved and look back within history, I expect that is what happened in the majority of cases where a world anything like this one gets saved, and working back from there, they will have needed a precise theory, because otherwise they’re doomed. You can make mistakes and pull yourself up, even if you think you have a precise theory, but if you don’t have a precise theory then you’re completely doomed, or if you don’t think you have a precise theory then you’re completely doomed.
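For reference, the AIXI formalism mentioned in the question is a good illustration of what “precise but impractical” means here. Hutter defines the optimal action by a single expectimax expression over all programs, which is exact but incomputable (this equation is from Hutter’s work, not from the quoted transcript):

$$a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m} \bigl[ r_t + \cdots + r_m \bigr] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

Here $U$ is a universal Turing machine, $q$ ranges over all programs consistent with the interaction history of actions $a_i$, observations $o_i$, and rewards $r_i$, $\ell(q)$ is the length of program $q$, and $m$ is the horizon. Every term is mathematically precise, but the inner sum over all programs makes the whole thing uncomputable, which is exactly the “precise, impractical AI” starting point described above.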
Also,
Aside from that, though, I think that saving the human species eventually comes down to, metaphorically speaking, nine people and a brain in a box in a basement, and everything else feeds into that. Publishing papers in academia feeds into either attracting attention that gets funding, or attracting people who read about the topic, not necessarily reading the papers directly, but just sort of raising the profile of the issue so that intelligent people wondering what they can do with their lives think of artificial intelligence...
I get the sense that Eliezer wants to be one of the nine people in that basement, if he can be, but I might be stretching the evidence a little to say “Eliezer has expressed that ultimately, the goal of MIRI is not just to research how to make FAI, but to be the ones to make it.”
Thanks! Haven’t seen that before. I still think it would be better to specialize in the ethics issues and then apply the results to an AGI system developed by another (hopefully friendly) party. But it would be awesome if someone who is genuinely ethical develops AGI first. I’m really hoping that some big organization that has gone furthest in AI research, like Google, decides to cooperate with MIRI on that issue when they reach the critical point in AGI development.
This is something that I think is neglected (in part because it’s not the relevant problem yet) in thinking about Friendly AI. Even if we had solved all of the problems of stable goal systems, there could still be trouble, depending on whose goals are implemented. If it’s a fast take-off, whoever cracks recursive self-improvement first basically gets godlike powers (in the form of a genie that reshapes the world according to your wish). They define the whole future of the expanding visible universe. There are a lot of institutions that I do not trust to have the foresight to think “We can create utopia beyond anyone’s wildest dreams” rather than defaulting to “We’ll skewer the competition in the next quarter.”
However, there are unsubstantiated rumors that Google has taken on some ex-MIRI people to work on a project of some kind.