It could be that it’s just impossible to build a safe FAI under the utilitarian framework and all AGIs are UFAIs.
Otherwise, the LessWrong memeplex has the advantage of being very diverse. When it comes to a subject like politics, we have people with mainstream views but also people who think that democracy is wrong. Having such a diversity of ideas makes it difficult for all of LessWrong to be wrong.
Some people paint a picture of LessWrong as a crowd of people who believe that everyone should do cryonics. In reality, most of the participants aren’t signed up for cryonics.
Take a figure like Nassim Taleb. He’s frequently quoted on LessWrong, so he’s not really outside the LessWrong memeplex. But he’s also a Christian.
There are a lot of memes floating around in the LessWrong memeplex that are present at a basic level but that most people don’t take to their full conclusion.
So, how might we find that all these ideas are massively wrong?
It’s a topic that’s very difficult to talk about. Basically, you try out different ideas and look at the effects of those ideas in the real world.
Mainly because of Quantified Self (QS) data, I delved into the system of Somato-Psychoeducation. The data I measured was improvement in a health variable. It was enough to get over the initial barrier to go inside the system. But now I can think inside the system, and there’s a lot going on which I can’t put into good metrics.
There is, however, no way to explain the framework in an article. Most people who read the introductory book don’t get the point until they’ve spent years experiencing the system from the inside.
It’s the very nature of things that are really outside the memeplex that they’re not easily expressible in terms of ideas inside the memeplex in a way that won’t be misunderstood.
It could be that it’s just impossible to build a safe FAI under the utilitarian framework and all AGIs are UFAIs.
That’s not the LW-memeplex being wrong, that’s just a LW-meme which is slightly more pessimistic than the more customary “the vast majority of AIs are unfriendly, but we might be able to make this work” view. I don’t think any high-profile LWers who believed this would be absolutely shocked to find out that it was too optimistic.
MIRI-LW being plausibly wrong about AI friendliness is more like: “Actually, all the fears about unfriendly AI were completely overblown. Self-improving AIs don’t actually ‘FOOM’ dramatically … they simply get smarter at the same exponential rate that the rest of the humans+tech system has been getting smarter all this time. There isn’t much practical danger of them rapidly outracing the rest of the system and seizing power and turning us all into paperclips, or anything like that.”
If that sort of thing were true, it would imply that a lot of prominent rationalists have been wasting time (or at least, doing things which end up being useful for reasons entirely different from the reasons they were supposed to be useful for).
If it’s impossible to build FAI, that might mean that one should in general discourage technological development to prevent AGI from being built.
It might mean building moral frameworks that allow for effective prevention of technological development. I do think that significantly differs from the current LW-memeplex.
What I mean is… the difference between “FAI is possible but difficult” and “FAI is impossible and all AI are uFAI” is like the difference between “A narrow subset of people go to heaven instead of hell” and “every human goes to hell”. Those two beliefs are mostly identical.
Whereas “FOOM doesn’t happen and there is no reason to worry about AI so much” is analogous to “belief in an afterlife is unfounded in the first place”. That’s a massively different idea.
In one case, you’re committing a little heresy within a belief system. In the other, the entire theoretical paradigm was flawed to begin with. If it turns out that “all AI are UFAI” is true, then LessWrong/MIRI would still be a lot more correct about things than most other people interested in futurology/transhumanism, because they got the basic theoretical paradigm right. (Just like, if it turned out hell existed but not heaven, religionists of many stripes would still have reason to be fairly smug about the accuracy of their predictions, even if none of the actions they advocated made a difference.)
like the difference between “A narrow subset of people go to heaven instead of hell” and “every human goes to hell”. Those two beliefs are mostly identical.
Mostly identical as far as theology is concerned, but very different in terms of the optimal action. In the first case, you want (from a selfish-utilitarian standpoint) to ensure that you’re in the narrow subset. In the second, you want to overthrow the system.