AFAIK, that was the stuff Goertzel was doing as Director of Research. Now that he isn’t around anymore, those things were dropped.
Pretty much the whole “practical experimentation” angle is, again AFAIK, considered too unsafe by the people currently running things at SIAI. At least that’s what I was told during my Visiting Fellow time.
I expect that improving on the state of the art in practical AI is also almost totally useless for figuring out a way toward FAI, so “unsafe” is almost beside the point (except that making things worse while not making them better is not a good plan).
How do you expect to prove anything about an FAI without even knowing what an AGI would look like? I don’t think current AI researchers even have that great of an idea of what AGI will eventually look like...
Now, improving on the state of the art might not be helpful, but being in a position where you could improve on it would be; and the best way to make sure you are in such a position is to have actually done it at least once.
How do you expect to prove anything about an FAI without even knowing what an AGI would look like? I don’t think current AI researchers even have that great of an idea of what AGI will eventually look like...
It will be (and look) the way we make it. And we should make it right, which requires first figuring out what that is.
An AGI is an extremely complex entity. You don’t get to decide arbitrarily how to make it. If nothing else, there are fundamental computational limits on Bayesian inference that are not even well-understood yet. So if you were planning to make your FAI a Bayesian then you should probably at least be somewhat familiar with these issues, and of course working towards their resolution will help you better understand your constraints. I personally strongly suspect there are also fundamental computational limits on utility maximization, so if you were planning on making your FAI a utility maximizer then again this is probably a good thing to study. Maybe you don’t consider this AGI research but the main approach to AGI that I consider feasible would benefit at least somewhat from such understanding.
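(As a concrete illustration of the kind of limit being gestured at above: even the most naive form of exact Bayesian inference, summing a joint distribution over every assignment of the hidden variables, takes time exponential in the number of variables, and exact inference in general Bayesian networks is known to be NP-hard. The sketch below is an editorial aside, not anything from the thread; the `marginal` helper and the three-coin toy joint are assumptions chosen purely to show where the 2^n shows up.)

```python
from itertools import product

def marginal(query_var, query_val, variables, joint):
    """P(query_var = query_val) by brute-force enumeration.

    `joint` maps a full assignment (dict of variable -> 0/1) to its
    probability. The loop below runs 2 ** len(variables) times, which
    is the whole point: exact inference done this way blows up fast.
    """
    total = 0.0
    for values in product([0, 1], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if assignment[query_var] == query_val:
            total += joint(assignment)
    return total

# Toy usage: three independent fair coins, so P(a = 1) should be 0.5,
# but we already needed 2**3 = 8 joint evaluations to confirm it.
variables = ["a", "b", "c"]
joint = lambda assignment: 0.5 ** len(assignment)
print(marginal("a", 1, variables, joint))  # 0.5
```

Smarter algorithms (variable elimination, belief propagation, sampling) push the wall back, but the worst-case hardness is the sort of constraint you would want to understand before committing an FAI design to “just be a Bayesian”.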
In my opinion, getting to provably friendly AI before someone else gets to AGI is hopeless. The best one can hope for is that (i) brain uploads come first, or (ii) we get a fairly transparent AGI design coupled with a good understanding of meta-ethics. This means that, as far as I can see, if you want to reduce x-risk from UFAI then you should be doing one of the following:
working towards brain uploads to make sure they come first
working on the statistical approach to AI to make sure it gets to AGI before the connectionist approach (and developing software tools to help us better understand the statistical algorithms we write)
working on something like lukeprog’s program of metaethics (this is probably the best of the three)
Do you know where the “we have to work towards AGI before we can make progress on FAI” meme came from? (I’m not sure if that’s a caricature of the position or what.)
It’s an exaggeration in that form, but a milder version seems pretty obvious to me. If you want to design a safe airplane, you need to know something about how to make a working airplane in the first place.
While there are certainly theoretical parts of FAI theory that you can make progress on even without knowing anything about AGI, there’s probably a limit to how far you can get that way. For your speculations to be useful, you’ll sooner or later need to know something about the design constraints. And they’re not only constraints—they’ll give you entirely new ideas and directions you wouldn’t have considered otherwise.
It sounds nonsensical to claim that you could design safe airplanes without knowing anything about airplanes, that you could be a computer security expert without knowing anything about how software works, or that you could design a safe building without knowing anything about architecture. Why would it make any more sense to claim that you could design FAI without knowing AGI?
In this analogy, the relevant concern maps for me to the notion of “safety” of airplanes. And we know what “safety” for airplanes is: it means people don’t die. It’s hard to make a proper analogy, since for all usual technology the moral questions are easy, and you are left with technical questions. But with FAI, we also need to do something about moral questions, on an entirely new level.
I agree that solving FAI also involves solving non-technical, moral questions, and that considerable headway can probably be made on these without knowledge about AGI. I was only saying that there’s a limit on how far you can get that way.
How far or near that limit is, I don’t know. But I would think that there’d be something useful to be found from pure AGI earlier than one might naively expect. E.g. the Sequences draw on plenty of math/compsci related material, and I expect that likewise some applications/techniques from AGI will also be necessary for FAI.
We know tons about AGI already; why do people presume we need to learn more? (It’s totally obvious how to make safe airplanes before you have a Wright flyer. Use lots of cushions, use two engines, make everything redundant actually, make everything of incredibly high quality material, et cetera. I think your analogy is way too leaky.)
That is not in fact how to design safe airplanes. Airplanes are safe mainly because they don’t crash. Making everything redundant in an effective way is quite nontrivial and probably depends on the specific design of the aircraft (if you disagree then provide a (rough) algorithm for how to make everything redundant in a way that actually achieves the safety levels of modern aircraft). More importantly, redundancy is not particularly helpful if we don’t already know that the plane is guaranteed to fly properly in a wide range of operating conditions that are precisely enough quantified that we can determine in advance whether to ground a plane because e.g. the weather is too extreme.
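(A back-of-the-envelope sketch of why “just make everything redundant” doesn’t get you modern airliner safety by itself; all the numbers below are invented purely for illustration.)

```python
# Hypothetical per-flight failure probabilities, invented for illustration.
p = 1e-3   # one copy of some critical component failing on its own
q = 1e-5   # a common-cause failure taking out both copies at once
           # (shared fuel, shared software bug, icing, maintenance error...)

independent_pair = p ** 2                   # 1e-6: what naive duplication buys
whole_system = 1 - (1 - q) * (1 - p ** 2)   # ~1.1e-5: the common cause dominates
print(independent_pair, whole_system)
```

Duplication only delivers the p² term if the two copies really do fail independently; a single shared failure mode puts a floor of roughly q under the whole system, which is why effective redundancy depends on the specific aircraft design and on hunting down its common-mode failures, not on a design-independent recipe.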
I think you cannot figure out airplane safety before you build airplanes. For example, you’d have to be almost superhumanly smart to predict and beat flutter without experimenting. Or spin, as Kaj_Sotala pointed out. Or spatial disorientation. Even without going into details, can you complete the list of problems by yourself?
The fact that nobody knows how to build AGI hardly counts as “knowing tons already”.
Some of the stuff about how to make safe airplanes is obvious. Stuff like how to design the plane so as to minimize the risk of spin isn’t. If you disagree, please tell me how to build a maximally stall-safe airplane without looking at the Wikipedia link or otherwise looking it up.
Humanity basically already knows how to build AGI. That doesn’t really matter, though. FAI and AGI are incredibly different problems. They share basically none of the same architectural features. I think that understanding what an AGI is or what it would do is incredibly important for FAI; is that what you mean? But knowing how to build AGI doesn’t do that for you; you still have to actually turn it on.
Agree that my parenthetical was straight-up wrong, but it was indeed a parenthetical. I still think the analogy is too loose to be useful.
I’d like evidence for the claim that humanity, or some subset thereof, “basically already knows how to build AGI”. I would stipulate that we know how to do whole-brain emulation, given adequate computational resources; is that what you had in mind?
No.
You are aware that exchanges such as this one will make most people think you’re a troll, right?
Can I be aware of something that I think might be false? Anyway, I am aware that people do not seem to understand that a small amount of information when requested is significantly better than no information, and I am aware that there are no affordances that would allow people to take the 3 seconds of reflection that would be necessary to realize this after realizing that their implicit expectations about how much information they deserve to receive from near-anonymous people are probably not based on any sort of explicitly justified or explicitly reflected-upon analysis. Because there’s never an affordance for basic sanity, especially not when you’re busy climbing over the dead bodies towards the top slot of a huge negative sum signalling game and you’re not even sure why you’re playing and you don’t really care to find out. Ya know, what I love about memes is that even if Buddhas never have kids it doesn’t matter much, any mind can become a vessel for perfection via reflection on clarification of perception. The genes can abstain from burning the cosmic commons with viral waste—how much of karma is the timeless-analogous-controlling-decision to engage in the class of actions that prominently includes negative sum signalling games? -- while the memes spread farther and farther even without so desiring, simply by the timeless beauty of their nature. Right makes might makes right, as they say.
When someone points out a normally hidden consequence of what you’re doing, you don’t tell them off.
The norm is to give a full answer. If you don’t, say you will elaborate later, give a summary, or say it’s too long to explain. Don’t unilaterally defect.
If you defect, you’re telling people your time is more precious than theirs. This is rude.
In particular, you did bother to post on LW in the first place, so “near-strangers deserve no information from me” is unfair.
We want to promote pointing out specific problems. A rant against memetics and signalling games in general is a bad answer to specific criticism.
that a small amount of information when requested is significantly better than no information
That’s assuming that the information is correct. It could also be wrong or misleading, in which case it would be better not to receive it. While “Do you mean whole brain emulation?” “No” doesn’t fall into this category, claims like “we know how to build AGI” are definitely claims that could be wrong, and are indeed generally considered to be wrong.
Unless you provide a reasonable argument or reason for why we should believe such a claim, anyone maintaining any epistemic hygiene standards (or common sense, for that matter) will be forced to ignore it. Therefore such comments only serve as a distraction, providing no useful value but taking up space and attention.
If I were to comment on conversations and tell people that 2 + 2 = 5 and then refuse to provide any justification when asked, people would quite reasonably conclude that I was a troll, too.
Anyway, I am aware that people do not seem to understand that a small amount of information when requested is significantly better than no information, and I am aware that there are no affordances that would allow people to take the 3 seconds of reflection that would be necessary to realize this after realizing that their implicit expectations about how much information they deserve to receive from near-anonymous people are probably not based on any sort of explicitly justified or explicitly reflected-upon analysis.
When I notice myself writing a sentence like that, I drop it and do something else. Maybe I would come back to it later, but not less than 24 hours later.
I try to as well. Unfortunately I was distraught squared at the time and wasn’t thinking at all clearly as far as social reasoning goes. My apologies to everyone for failing to keep my comments sufficiently constructive.