You can divide inputs into grabby and non-grabby, existent and non-existent, ASI and AGI, and outcomes into all manner of dystopia or nonexistence, and probably carve up most of hypothesis space. You can do this with basically any subject.
But if you think you can reason about respective probabilities in these fields in a way that isn’t equivalent to fanfiction, you are insane.
“My current probability is something like 90% that if you produced hundreds of random uncorrelated superintelligent AI systems, <1% of them would be conscious.”
This is what I’m talking about. Have you ever heard of the hard problem of consciousness? Have we ever observed a superintelligent AI? Have we ever generated hundreds of them? Do we know how we would go about generating hundreds of superintelligent AIs? Is there any known convergence in how superintelligences develop?
Of course, there’s a very helpful footnote saying “I’m not certain about this,” so we can say “well, he’s just refining his thinking!”
No, he’s not; he’s writing fanfiction.
It struck me today that maybe you’re mistaking this exercise in trying to explain one’s position for giving precise, workable predictions.
If you interpret “My current probability is something like 90% that if you produced hundreds of random uncorrelated superintelligent AI systems, <1% of them would be conscious.” as a prediction of what will happen, then yes, this does seem somewhat ludicrous. On the other hand, you can also interpret it as “I’m pretty sure (on the basis of various intuitions etc.) that the vast majority of possible superintelligences aren’t conscious”. This isn’t an objective statement of what will happen; it’s an attempt to describe subjective beliefs in a way that lets other people know how much you believe a given thing.
On the other hand, you can also interpret it as “I’m pretty sure (on the basis of various intuitions etc.) that the vast majority of possible superintelligences aren’t conscious”. This isn’t an objective statement of what will happen
What do you mean by saying that this is not an objective statement or a prediction?
Are you saying that you think there’s no underlying truth to consciousness?
We know it’s measurable, because that’s basically ‘I think, therefore I am.’ It’s not impossible that someday we could come up with a machine or algorithm that can measure consciousness, so it’s not impossible that this ‘non-prediction’ or ‘subjective statement’ could be proved objectively wrong.
My most charitable reading of your comment is that you’re saying the post is highly speculative and based on ‘subjective’ (read: arbitrary) judgements. That is my position; it’s what I just said. It’s fanfiction.
I think even if you were to put “this is just speculation, and highly uncertain” at the start, it would still be inappropriate content for a site about thinking rationally, for a variety of reasons, one of which is that people will base their own beliefs on your subjective judgements or otherwise be biased by them.
And even when you speculate, you should never be assigning 90% probability to a prediction about CONSCIOUSNESS and SUPERINTELLIGENT AI.
God, it just hit me again how insane that is.
“I think that [property we cannot currently objectively measure] will not be present in [agent we have not observed], and I think that I could make 10 predictions of similar uncertainty and be wrong only once.”
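To unpack that paraphrase, here is a minimal sketch of the arithmetic behind “wrong only once,” assuming (purely for illustration) ten independent claims each held at 90% credence; the round number of ten and the independence assumption are illustrative, not taken from the original post.

```python
# Rough sketch: what a 90% credence commits you to, assuming
# (for illustration only) ten independent claims each held at 90%.
from math import comb

p = 0.9   # stated credence in each claim
n = 10    # number of similarly confident claims

# On average, about one of the ten claims should turn out wrong.
expected_wrong = n * (1 - p)  # = 1.0

# Chance that every one of the ten claims turns out right.
p_all_right = p ** n  # ~0.35

# Chance of being wrong on two or more of the ten claims.
p_two_plus_wrong = 1 - sum(
    comb(n, k) * (1 - p) ** k * p ** (n - k) for k in range(2)
)  # ~0.26

print(expected_wrong, p_all_right, p_two_plus_wrong)
```

Whether numbers like these are defensible for claims about consciousness and superintelligent AI is exactly the point in dispute above; the sketch only spells out what the stated credence commits you to if it is read as a calibration claim.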