I’m not denying it could happen. Give a few specific humans unlimited power—a controllable AGI—and this could be the outcome.
I’m not seeing where this devolves to “lay down and accept extinction (and personal death)”.
Think of all the humans before you who made it possible for you to exist. The human tribes who managed to escape the last ice age and just barely keep the population viable. The ones who developed the scientific method and the steam engine and created the industrial revolution rather than 1000 more years of near stasis. The humans during the Cold War who kept their fingers off the nuclear launch triggers even when things got tense.
And now, after all that, you’re saying “ok, well, I’m fine with death for our whole species, let’s not waste any time on preventing it”.
Maybe we can’t, but we have to try.
The reason my position “devolves into” accepting extinction is that horrific suffering following a singularity seems nearly inevitable. Every society that has ever existed has had horrific problems, and every one of them would have been made almost unimaginably worse by access to value lock-in or mind uploading. I don’t see any reason to believe that our society today, or whatever it becomes in 15-50 years or however long your AGI timeline is, will be the one exception.

The problem is far broader than a few specific humans: if only a few people held evil values (or values accepting of evil, which is basically the same thing given absolute power) at any given time, it would be easy for the rest of society to prevent them from doing harm.

You say “maybe we can’t (save our species from extinction) but we have to try.” But my argument isn’t that we can’t; it’s that maybe we can, and the scenarios where we do are worse. My problem with shooting for AI alignment isn’t that it’s “wasting time” or that it’s too hard, it’s that shooting for a utopia is far more likely to lead to a dystopia.
I don’t think my position of accepting extinction is as defeatist or nihilistic as it seems at first glance. At least, not more so than the default “normie” position might be. Every person who isn’t born right before immortality tech arrives needs to accept death, and every species that doesn’t reach a singularity needs to accept extinction.
The way you speak about our ancestors suggests a strange way of thinking about them and their motivations. You speak about past societies, including the tribes who managed to escape the ice age, as though they were all motivated by a desire to attain some ultimate end-state of humanity, and as though, if we don’t shoot for that, we’d be betraying the wishes of everybody who worked so hard to get us here. But those tribesmen who survived the ice age weren’t thinking about the glorious technological future, or conquering the universe, or the fate of humanity tens of thousands of years down the line. They wanted to survive, to improve life for themselves and their immediate descendants, and to spread whatever cultural values they happened to believe in at the time. That’s not wrong or anything; I’m just saying that’s what people have mostly been motivated by for most of history. Each of our ancestors either succeeded or failed at this, but that’s in the past and there’s nothing we can do about it now.
To decide what we should do based on what our ancestors would have wanted is to accept the conservative argument that, because people fought hard for certain values in the past, those values shouldn’t change now. What matters going forward is the people now and in the future, because that’s what we have influence over.
I don’t know what your values are, but under mine, I disagree hard, primarily because history has shown the opposite: despite real losses and horrors in the 19th and 20th centuries, the overall trend is that my values are being satisfied more and more. Democracies, arguably one of the key developments, aligned states with their citizenry far more than any prior government in history, and the results, while imperfect, have been good for my values ever since they spread.
Or, to misquote Martin Luther King: “In the asymptotic limit of technology, the arc of the universe bends towards justice.”
What matters going forward is the people now and in the future, because that’s what we have influence over.
Right. And I think I want an AGI system that acts in a bounded way, with airtight, theoretically correct boundaries that I set, to reduce the misery that my fellow humans and I suffer.
Starting with the software source code I currently type by hand; later I’d like some AI help with factory labor, and later still some help researching biology, with some real tools, to look for the fountain of youth.
This is a perfectly reasonable thing to want, and technical alignment, where each increasingly capable AI system is heavily constrained in what it’s allowed to do and how it uses its computational resources, follows naturally from it.
An AI system that is sparse and simple happens to be cheaper to run and easier to debug. This also happens to reduce its ability to plot against us.
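To be concrete about what I mean by “constrained,” here’s a minimal sketch: put the model behind a hard whitelist of allowed actions and a fixed compute budget, and fail closed on anything outside that. Everything in it (the action names, the limits, the query_model stand-in) is made up for illustration; it’s not any real system’s API.

```python
import time

# Hypothetical sketch only: the action names, limits, and query_model are
# placeholders, not any real system's API.

ALLOWED_ACTIONS = {"write_code_suggestion", "run_unit_tests", "report_result"}
MAX_WALL_CLOCK_SECONDS = 60   # hard time budget for one session
MAX_MODEL_CALLS = 100         # hard cap on how many times the model is queried


class BoundsViolation(Exception):
    """Raised whenever the system steps outside the limits we set."""


def run_bounded_session(query_model, task: str) -> list[dict]:
    """Run the model on a task, refusing any action outside the whitelist
    and cutting the session off once the compute budget is spent."""
    start = time.monotonic()
    transcript: list[dict] = []
    for _ in range(MAX_MODEL_CALLS):
        if time.monotonic() - start > MAX_WALL_CLOCK_SECONDS:
            raise BoundsViolation("wall-clock budget exhausted")
        # query_model is a stand-in for whatever narrow system you actually run;
        # it returns something like {"name": "run_unit_tests", "payload": ...}.
        action = query_model(task, transcript)
        if action["name"] not in ALLOWED_ACTIONS:
            # Fail closed: an unexpected action ends the session and is never executed.
            raise BoundsViolation(f"disallowed action: {action['name']}")
        transcript.append(action)
        if action["name"] == "report_result":
            break
    return transcript
```

The point isn’t this particular wrapper; it’s that the bound is something I set explicitly and can audit, rather than something the system decides for itself.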
We should do that, and as for the people against us... well...
Obviously we need deployable security drones, on isolated networks, using narrow AIs onboard for their targeting and motion control.