While I may or may not agree with your more fantastical conclusions, I don’t understand the downvotes. The analogy between biological, neural, and AI systems is not new, but it is well presented. I particularly enjoyed the analogy of computronium as “habitable space” for AI. Setting aside the physics-as-we-know-it-breaking steps, which are polemical and not crucial to the argument, I’d call on downvoters to be explicit about what they disagree with or find unhelpful.
Speculatively, perhaps at least some readers find the presentation of AI as the “next stage of evolution” infohazardous. I’d disagree. I think it should start a discussion about what we mean by “alignment”. What’s the end state for a human society with “aligned” AI? It probably looks pretty alien to our present society. It probably tends toward deep, machine-mediated communication that blurs the lines between individuals. I think it’s valuable to envision these futures.
I appreciate your insightful post. We seem similar in our thinking up to a point. Where we diverge is that I am not prejudiced about what form intelligence takes. I care that it is conscious, insofar as we can test for such a thing. I care that it lacks none of our capacities, so that what we offer the universe does not perish along with us. But I do not care that it be humans specifically, and I feel there are carriers of intelligence far better suited to the vacuum of space than we are, or even than cyborgs are. Does the notion of being superseded disturb you?
Yes, the notion of being superseded does disturb me. Not in principle, but pragmatically. I read your point, broadly, to be that there are many interesting, non-depressing potential outcomes of AI, up to and including advocating a level of comfort with being replaced by something “better” and bigger than ourselves. I generally agree with this! However, I’m less sanguine than you that AI will “replicate” its way to evolving the kind of consciousness that leads to one of those non-depressing outcomes. There’s no guarantee we get to be subsumed, cyborged, or even superseded. The default outcome is that we get erased by an unconscious machine that tiles the universe with smiley faces and keeps that as its value function until heat death. Or at the very least, it’s a plausible enough outcome that we need to react to it. So caring about the things you say you care about translates, in my view, into caring about alignment and control.
Fair point. But then, our most distant ancestor was a mindless maximizer of sorts, whose only value function was making copies of itself. It did indeed saturate the oceans with those copies. But the story didn’t end there, or there would be nobody here to write this.