These types of posts are what drive me to largely regard lesswrong as unserious. Solve the immediate problem of AGI, and then we can talk about whatever sci-fi bullcrap you want to.
Foxes > Hedgehogs.
You’ll learn a lot more about the future by paying attention to what’s happening right now than by wild extrapolation.
These types of posts are what drive me to largely regard lesswrong as unserious.
Do you think that there are specific falsehoods in the OP? Or do you just think it’s unrespectable for humans to think about the future?
Solve the immediate problem of AGI, and then we can talk about whatever sci-fi bullcrap you want to.
Some people object to working on AGI alignment on the grounds that the future will go a lot better if we take our hands off the steering wheel and let minds develop “naturally” and “freely”.
Some of those people even own companies with names like “Google”!
The best way to address that family of views is to actually talk about what would probably happen if you let a random misaligned AGI, or a random alien, optimize the future.
Foxes > Hedgehogs.
So on your understanding, “foxes” = people who have One Big Theory about which topics are respectable, and answer all futurist questions based on that theory? While “hedgehogs” = people who write long, detailed blog posts poking at various nuances and sub-nuances of a long list of loosely related object-level questions?
… Seems to me that you’re either very confused about what “foxes” and “hedgehogs” are, or you didn’t understand much of the OP’s post.
Writing a long post about a topic doesn’t imply that you’re using One Simple Catch-All Model to generate all the predictions, and it doesn’t imply that you’re confident about the contents of the post. Refusing to think about a topic isn’t being a “fox”.
You’ll learn a lot more about the future by paying attention to what’s happening right now than by wild extrapolation.
Because as all foxes know, “thinking about the present” and “thinking about the future” are mutually exclusive.
There is one ultimate law of futurology: predicting the future is very hard, and it only gets harder as you extend timelines out 100-500 million years.
If your hypothetical future involves both aliens and AGI, both of which are agents (emphasis emphasized) we have never observed and cannot really model in any way, you are not describing anything that can be called truth.
You are throwing a dart at an ocean of hypothesis space and hoping to hit a specific starfish that lives off the coast of Australia.
It’s not a question, you’re wrong.
Looking at the agents that aren’t hypothetical, i.e. biological ones, one thing they tend to have in common is resource hogging, mainly for the simple reason that those which didn’t try to get as much of the pie as possible tended to get outcompeted by those which did (a toy model of this dynamic is sketched after this comment). So while you can’t tell for sure what any hypothetical aliens would be like, it’s certainly plausible to model them as wanting to collect resources, as that’s pretty much universal, at least among agents that tend to spread. This suggests that if there are expansionary aliens around, they’re likely to be the kind that would also like our resources (this is where Hanson’s grabby aliens come in). Looking at the only data we have, this tends to end badly for the species already there if the newcomers have better abilities (for lack of a better phrase).
Any potential aliens will be, well, alien, which if I understand correctly is sort of your point. This means that they’re likely to have totally different values etc. and would have a totally different vision of what the universe should look like. This would be fascinating, as long as it isn’t at the cost of human values.
The same argument applies to AGI: if (or when) it appears, it stands to reason that it’ll go for power at the cost of humans. That could be fine, as long as it did nice things (like in the Culture books), but that’s throwing a dart at an ocean of hypothesis space and hoping to hit a specific starfish that lives off the coast of Australia.
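To make the selection point in the comment above concrete, here is a toy replicator-style sketch (my own illustration with arbitrary numbers, not anything from the thread or from Hanson’s model): two lineages split a fixed resource pool in proportion to how aggressively they grab it, and each lineage’s next-generation size is proportional to what it captured.

```python
# Toy selection sketch: two lineages split a fixed resource pool in proportion to
# how aggressively they try to grab it, and each lineage's next-generation size is
# proportional to the resources it captured. All numbers are arbitrary illustrations.

grabby, modest = 1.0, 99.0         # initial population sizes (grabby starts rare)
grab_rate, modest_rate = 2.0, 1.0  # relative appetite for resources
resources = 100.0                  # fixed pool contested each generation

for generation in range(30):
    demand = grabby * grab_rate + modest * modest_rate
    grabby = resources * (grabby * grab_rate) / demand
    modest = resources * (modest * modest_rate) / demand

print(f"grabby fraction after 30 generations: {grabby / (grabby + modest):.4f}")
# Starting at 1% of the population, the grabbier lineage ends up as essentially
# all of it -- the sense in which resource-modest strategies get outcompeted.
```

Nothing in the sketch depends on details of biology or aliens; it only illustrates the claim that, under competition for a shared pool, the more acquisitive strategy takes over.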
This post isn’t telling a very specific story that requires multiple things to go exactly right. It’s considering the possible ways to partition the hypothesis space with the assumptions that AGI and aliens are possible. And then trying to put a number on them. Nate specifically said that these weren’t hard calculations, just a way to be more precise about what he thinks.
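As an aside, the “partition the hypothesis space and put a number on each piece” move can be made concrete with a minimal sketch like the one below; every event and probability is invented purely for illustration (these are not Nate’s figures), and the independence assumption is a simplification.

```python
# Toy sketch of "partition the hypothesis space, then put numbers on the pieces".
# Every event and probability below is invented for illustration only.

p_aliens_exist = 0.5    # some expansionary civilization exists within reach
p_aliens_grabby = 0.6   # conditional on existing, it grabs resources (Hanson-style)
p_agi_misaligned = 0.8  # conditional on AGI being built, it pursues other goals

# One cell of the partition: "misaligned AGI and grabby aliens", treating the
# questions as independent purely to keep the arithmetic simple.
p_meet = p_agi_misaligned * p_aliens_exist * p_aliens_grabby
print(f"P(misaligned AGI and grabby aliens) = {p_meet:.2f}")

# A complementary cell: "misaligned AGI, but no expansionary aliens around".
p_alone = p_agi_misaligned * (1 - p_aliens_exist * p_aliens_grabby)
print(f"P(misaligned AGI, no grabby aliens) = {p_alone:.2f}")
```

Whether numbers like these track anything real is exactly what the rest of this exchange disputes.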
You can divide inputs into grabby and non-grabby, existent and non-existent, ASI and AGI, and outcomes into all manner of dystopia or nonexistence, and probably carve up most of the hypothesis space. You can do this with basically any subject.
But if you think you can reason about respective probabilities in these fields in a way that isn’t equivalent to fanfiction, you are insane.
“My current probability is something like 90% that if you produced hundreds of random uncorrelated superintelligent AI systems, <1% of them would be conscious.”
This is what I’m talking about. Have you ever heard of the hard problem of consciousness? Have we ever observed a superintelligent AI? Have we ever generated hundreds of them? Do we know how we would go about generating hundreds of superintelligent AI? Is there any convergence with how superintelligences develop?
Of course, there’s a very helpful footnote saying “I’m not certain about this,” so we can say “well he’s just refining his thinking!”
No he’s not, he’s writing fanfiction.
It struck me today that maybe you’re mistaking this exercise in trying to explain one’s position for giving precise, workable predictions.
If you interpret “My current probability is something like 90% that if you produced hundreds of random uncorrelated superintelligent AI systems, <1% of them would be conscious” as a prediction of what will happen, then yes, this does seem somewhat ludicrous. On the other hand, you can also interpret it as “I’m pretty sure (on the basis of various intuitions etc.) that the vast majority of possible superintelligences aren’t conscious”. This isn’t an objective statement of what will happen; it’s an attempt to describe subjective beliefs in a way that lets other people know how much you believe a given thing.
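One way to cash out that “subjective belief” reading: a stated 90% is a claim about the speaker’s calibration, i.e. that of all the claims they would assign roughly 90% to, about nine in ten should turn out true. A minimal bookkeeping sketch, with invented outcomes purely for illustration:

```python
# Toy calibration bookkeeping: "90% confident" read as "right about 9 times in 10
# across claims held with this level of confidence". Outcomes are invented.

predictions = [
    # (stated credence, whether the claim turned out true)
    (0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, True),
    (0.9, True), (0.9, True), (0.9, True),  (0.9, True), (0.9, True),
]

hits = sum(1 for _, correct in predictions if correct)
print(f"stated credence: 0.9, observed hit rate: {hits / len(predictions):.2f}")
# A well-calibrated forecaster's hit rate should land near the stated credence,
# which is the sense in which a "subjective" 90% can still be scored against reality.
```

This is also the sense behind the “10 predictions of similar uncertainty and be wrong only once” phrasing further down the thread.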
On the other hand, you can also interpret it as “I’m pretty sure (on the basis of various intuitions etc.) that the vast majority of possible superintelligences aren’t conscious”. This isn’t an objective statement of what will happen
What do you mean by saying that this is not an objective statement or a prediction?
Are you saying that you think there’s no underlying truth to consciousness?
We know it’s measurable, because that’s basically ‘I think therefore I am.’ It’s not impossible that someday we could come up with a machine or algorithm which can measure consciousness, so it’s not impossible that this ‘non-prediction’ or ‘subjective statement’ could be proved objectively wrong.
My most charitable reading of your comment is that you’re saying that the post is highly speculative and based on ‘subjective’ (read: arbitrary) judgements. That’s my position; that’s what I just said. It’s fanfiction.
I think even if you were to put at the start “this is just speculation, and highly uncertain”, it would still be inappropriate content for a site about thinking rationally, for a variety of reasons, one of which is that people will base their own beliefs on your subjective judgments or otherwise be biased by them.
And even when you speculate, you should never be assigning 90% probability to a prediction about CONSCIOUSNESS and SUPERINTELLIGENT AI.
God, it just hit me again how insane that is.
“I think that [property we cannot currently objectively measure] will not be present in [agent we have not observed], and I think that I could make 10 predictions of similar uncertainty and be wrong only once.”
Ten years ago I expressed similar misgivings. Such scenarios, no matter how ‘logical’, are too easily invalidated by something not yet known. Better, e.g., to treat them as strongly hypothetical, and the problem of superintelligent AI as ‘almost certainly not hypothetical’. But we face the future with the institutions we have, not the institutions we wish we had, and part of the culture of MIRI et al. is an attachment to particular scenarios of the long-term future. So be it.