Let me clarify: is your conclusion then that we should basically support the genocide of the whole of humanity because the alternative would be far worse? Are you offering any alternatives other than that? Maybe a better and less apocalyptic conclusion would be to advocate against building any AI more advanced than what we have today, like some people already do? Do you think there’s any chance of that? Because I don’t, and from what you said it sounds like the only conclusion is that the only future for us is that we all die at the hands of Clippy.
Yes. The only other alternative I could see is finding some way to avoid a singleton until humanity eventually goes extinct naturally, but I don’t think that’s likely. Advocating against AI would be a reasonable response, but I don’t think it will work forever; technology marches on.
Every species goes extinct, and some have already gone extinct by being victims of their own success. The singularity is something which theoretically has the potential to give humanity, and potentially other species, a fate far better or far worse than extinction. I believe that the far worse fate is far more likely given what I know about humanity and our track record with power. Therefore I am against the singularity “elevating” humanity or other species away from extinction, which means I must logically support extinction instead since it is the only alternative.
Edit: People seem to disagree more strongly with this comment than anything else I said, even though it seems to follow logically. I’d like a discussion on this specific point and why people are taking issue with it.
Because what you are saying boils down to:
Death for sure is better than a chance of something else because it MIGHT be worse than death.
You have failed to make a convincing argument that “eternal suffering” is a likely outcome out of the possibility space.
It’s not a stable equilibrium. Clippy is stable and convergent; irrational humans making others suffer is not. So it doesn’t occupy more than a teensy amount of the possible outcome space.
A plausible outcome is one where an elite few who own the AGI companies get incredibly wealthy while the bulk of the population gets nutrient paste in modular apartments. But they also get autodocs and aging inhibitor pills.
This is still better than extinction.
One does not have to be “irrational” to make others suffer. One just has to value their suffering, or not care and allow them to suffer for some other reason. There are quite a few tendencies in humanity which would lead to this, among them:
Desire for revenge or “justice” for perceived or real wrongs
Desire to flex power
Sheer sadism
Nostalgia for an old world with lots of suffering
Belief in “the natural order” as an intrinsic good
Exploitation for selfish motives, e.g. sexual exploitation
Belief in the goodness of life no matter how awful the circumstances
Philosophical belief in suffering as a good thing which brings meaning to life
Religious or political doctrine
Others I haven’t thought of right now
I’m not denying it could happen. Give a few specific humans unlimited power—a controllable AGI—and this could be the outcome.
I’m not seeing where this devolves into “lay down and accept extinction (and personal death)”.
Think of all the humans before you who made it possible for you to exist. The human tribes who managed to escape the last ice age and just barely keep the population viable. The ones who developed the scientific method and the steam engine and created the industrial revolution rather than 1,000 more years of near-stasis. The humans in the Cold War who kept their fingers off the nuclear launch triggers even when things got tense.
And now you’re saying after all that “ok well I’m fine with death for our whole species, let’s not waste any time on preventing it”.
Maybe we can’t, but we have to try.
The reason my position “devolves into” accepting extinction is that horrific suffering following the singularity seems nearly inevitable. Every society which has yet existed has had horrific problems, and every one of them would have been made almost unimaginably worse if it had access to value lock-in or mind uploading. I don’t see any reason to believe that our society today, or whatever it will be in 15-50 years or however long your AGI timeline is, should be the one exception.

The problem is far broader than just a few specific humans: if only a few people held evil values (or values accepting of evil, which is basically the same thing given absolute power) at any given time, it would be easy for the rest of society to prevent them from doing harm. You say “maybe we can’t (save our species from extinction) but we have to try.” But my argument isn’t that we can’t; it’s that we maybe can, and the scenarios where we do are worse. My problem with shooting for AI alignment isn’t that it’s “wasting time” or that it’s too hard, it’s that shooting for a utopia is far more likely to lead to a dystopia.
I don’t think my position of accepting extinction is as defeatist or nihilistic as it seems at first glance. At least, not more so than the default “normie” position might be. Every person who isn’t born right before immortality tech needs to accept death, and every species that doesn’t achieve the singularity needs to accept extinction.
The way you speak about our ancestors suggests a strange way of thinking about them and their motivations. You speak about past societies, including tribes who managed to escape the ice age, as though they were all motivated by a desire to attain some ultimate end-state of humanity, and that if we don’t shoot for that, we’d be betraying the wishes of everybody who worked so hard to get us here. But those tribesmen who survived the ice age weren’t thinking about the glorious technological future, or conquering the universe, or the fate of humanity tens of thousands of years down the line. They wanted to survive, and to improve life for themselves and their immediate descendants, and to spread whatever cultural values they happened to believe in at the time. That’s not wrong or anything, I’m just saying that’s what people have been mostly motivated by for most of history. Each of our ancestors either succeeded or failed at this, but that’s in the past and there’s nothing we can do about it now.
To decide what we should do based on what our ancestors would have wanted is to accept the conservative argument that values shouldn’t change, on the grounds that people in the past fought hard to keep them. What matters going forward is the people now and in the future, because that’s what we have influence over.
I don’t know what your values are, but under mine I disagree hard, primarily because history has shown the opposite: despite real losses and horror in the 19th and 20th centuries, the overall trend is that my values are being satisfied more and more. Democracies, arguably one of the key developments, aligned states to their citizenry far more than any prior government in history, and the results have been imperfectly good for my values since they spread.
Or, to misquote Martin Luther King: “In the asymptotic limit of technology, the arc of the universe bends towards justice.”
What matters going forward is the people now and in the future, because that’s what we have influence over.
Right. And I want an AGI system that acts in a bounded way, with airtight, theoretically correct boundaries that I set, to reduce the misery my fellow humans and I suffer.
Starting with help writing software source code, so I don’t have to type it all by hand; later I’d like some AI help with factory labor; and later still some help researching biology to look for the fountain of youth with some real tools.
This is a perfectly reasonable thing to do, and technical alignment, where each increasingly capable AI system is heavily constrained in what it’s allowed to do and how it uses its computational resources, follows naturally.
An AI system that is sparse and simple happens to be cheaper to run and easier to debug. This also happens to reduce its ability to plot against us.
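To make the “heavily constrained” idea a bit more concrete, here’s a minimal sketch (in Python) of what I mean by a bounded system. Everything in it (BoundedAgent, ALLOWED_ACTIONS, the compute budget) is a made-up illustration, not any real alignment library; the point is just that the whitelist and the budget live in dumb code outside the model, where the model can’t edit them.

    # Sketch of a bounded-agent wrapper. All names here are hypothetical
    # illustrations, not a real API.
    ALLOWED_ACTIONS = {"write_code", "run_tests", "ask_human"}  # fixed action whitelist

    class BoundedAgent:
        def __init__(self, model, compute_budget):
            self.model = model                    # any callable that proposes (action, argument)
            self.compute_budget = compute_budget  # hard cap on model calls

        def step(self, observation):
            if self.compute_budget <= 0:
                return ("halt", None)             # out of budget: stop rather than improvise
            self.compute_budget -= 1
            action, argument = self.model(observation)
            if action not in ALLOWED_ACTIONS:
                return ("ask_human", action)      # anything off-whitelist gets escalated
            return (action, argument)

A sparse wrapper like this is exactly the kind of thing that’s cheap to run and easy to audit, which is the property I care about.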
We should do that, and as for the people against us...well...
Obviously we need deployable security drones, running on isolated networks and using narrow AIs onboard for their targeting and motion control.