That’s awfully vague. “Whatever window of time we had”, what does that mean?
There’s one kind of “technological progress” that SIAI opposes as far as I can tell: working on AGI without an explicit focus on Friendliness. Now if you happen to think that AGI is a must-have for ensuring the long-term survival of humanity, it seems to me that you’re already on board with the essential parts of SIAI’s worldview; as far as the vast majority is concerned, you’re indistinguishable from them.
Otherwise, there’s plenty of technology that is entirely orthogonal to the claims of SIAI: cheap energy, health, MNT, improving software engineering (so-called), and so on.
The current state of the world is unusually conducive to technological progress. We don’t know how long this state of affairs will last. Maybe a long time, maybe a short time. To fail to make progress as rapidly as we can is to gamble the entire future of intelligent life on it lasting a long time, without evidence that it will do so. I don’t think that’s a good gamble.
There’s one kind of “technological progress” that SIAI opposes as far as I can tell: working on AGI without an explicit focus on Friendliness.
I have seen claims to the contrary from a number of people, ranging from Eliezer himself a number of years ago to another reply to your comment just now. If SIAI were to officially endorse the position you just suggested, my assessment of their expected utility would increase significantly.
Well, SIAI isn’t necessarily a homogeneous bunch of people with respect to what they oppose or endorse, but did you look, for instance, at Michael Anissimov’s entries on MNT? (I focus on that because it’s the topic of Risto’s comment, and you seem to see it as a confirmation of your thesis.) You don’t get the impression that he thinks it’s a bad idea; quite the contrary: http://www.acceleratingfuture.com/michael/blog/category/nanotechnology/
Here is Eliezer on the SL4 mailing list:
If you solve the FAI problem, you probably solve the nanotech problem. If you solve the nanotech problem, you probably make the AI problem much worse. My preference for solving the AI problem as quickly as possible has nothing to do with the relative danger of AI and nanotech. It’s about the optimal ordering of AI and nanotech.
The Luddites of our times are, for instance, groups like the publishing and music industries; using that label to describe the opinions of people affiliated with SIAI just doesn’t make sense, IMO.
Human-implemented molecular nanotechnology is a bad idea. I just talk about it to attract people who think it’s important. MNT knowledge is a good filter/generator for SL3-and-beyond thinkers.
MNT without friendly superintelligence would be nothing but a disaster.
It’s true that SIAI isn’t homogeneous though. For instance, Anna is much more optimistic about uploads than I am personally.
Thanks for the link, yes, that does seem to be a different opinion (and some very interesting posts).
I agree with you about the publishing and music industries. I consider current rampant abuse of intellectual property law to be a bigger threat than the Singularity meme, sufficiently so that if your comparative advantage is in politics, opposing that abuse probably has the highest expected utility of anything you could be doing.
Molecular nanotechnology, and anything else that can be weaponized to let a very small group of people effectively kill a very large one, is probably something SIAI-type people would want countered by a global sysop scenario from the moment it gets developed.