Interesting, I seem to buck the herd in nearly exactly the opposite manner as you.
Meaning?
You buck the herd by saying their obsession with AI safety is preventing them from participating in the complete transformation of civilization.
I buck the herd by saying that the whole singularitarian complex is a chimera that has almost nothing to do with how reality will actually play out, and that its existence as a memeplex is explained primarily by sociological factors rather than by actual science, technology, or history.
Oh, well I mostly agree with you there. Really ending aging will have a transformative effect on society, but the invention of AI is not going to radically alter power structures in the way that singularitarians imagine.
See, I include the whole ‘imminent radical life extension’ and ‘Drexlerian molecular manufacturing’ idea sets in the singularitarian complex...
The craziest person in the world can still believe the sky is blue.
Ah, but in this case as near as i can tell it is actually orange.
“The medical revolution that began with the beginning of the twentieth century had warped all human society for five hundred years. America had adjusted to Eli Whitney’s cotton gin in less than half that time. As with the gin, the effects would never quite die out. But already society was swinging back to what had once been normal. Slowly; but there was motion. In Brazil a small but growing alliance agitated for the removal of the death penalty for habitual traffic offenders. They would be opposed, but they would win.”
Larry Niven: The Gift From Earth
Well, there are some serious ramifications that are without historical precedent. For example, without menopause it may become the norm for women to wait until retirement to have kids. It may in fact be the case that couples will work for 40 years, take a 25-30 year retirement in which they raise a cohort of children, and then re-enter the workforce for a new career. Certainly families are going to represent smaller and smaller percentages of the population as birth rates decline while people get older and older without dying. The social ramifications alone will be huge, which was more along the lines of what I was talking about.
This just seems stupid to me. Ending aging is fundamentally SLOW change. In 100 or 200 or 300 years, as more and more people gain access to anti-aging treatments (since they will start off very expensive), we can worry about that. But conscious AI will be a force in the world in under 50 years. And it doesn’t even have to be SUPER intelligent to cause insane amounts of social upheaval. Duplicability means that even one human-level AI can be spread worldwide or mass-produced in a very short time!
“Will”? You guarantee that?
Can you link to a longer analysis of yours regarding this?
I simply feel overwhelmed when people discuss AI. To me, intelligence is a deeply anthropomorphic category, which includes subcategories like having a good sense of humor. Reducing it to optimization, without even sentience or conversational ability with self-consciousness… my brain throws out the stop sign at that point, and it is not even AI itself: the preliminary studies of human intelligence already dehumanize, de-anthropomorphize the idea of intelligence and make it sound more like a simple, brute-force algorithm. Like Solomonoff induction, another thing my brain completely freezes over: how can you reach truth and clever solutions without really thinking, just by throwing a huge number of random ideas in and seeing what survives testing? Would it all be so quantitative? Can you reduce the wonderful qualities of the human mind to quantities?
Intelligence to what purpose?
Nobody’s saying AI will be human without humor, joy, etc. The point is AI will be dangerous, because it’ll have those aspects of intelligence that make us powerful, without those that make us nice. Like, that’s basically the point of worrying about UFAI.
But is it possible to have power without all the rest?
Certainly. Why not?
Computers can already outperform you in a wide variety of tasks. Moreover, today, with the rise of machine learning, we can train computers to do pretty high-level things, like object recognition or sentiment analysis (and they sometimes outperform humans in these tasks). Isn’t that power?
As for Solomonoff induction… What do you think your brain is doing when you are thinking? Some kind of optimized search in hypothesis space, so you consider only a very, very small set of hypotheses (compared to the entire space), hopefully good enough ones. Solomonoff induction, by contrast, checks all of them, every single hypothesis, and finds the best.
Solomonoff induction is so much thinking that it is incomputable.
Since we don’t have that much raw computing power (and never will), the hypothesis search must be heavily optimized: pruning unpromising directions of search, searching in regions with a high probability of success, using prior knowledge to narrow the search. That’s what your brain is doing, and that’s what machines will do. That is not “simple and brute-force”, because simple brute-force algorithms are either impractically slow or not computable at all.
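To make the contrast concrete, here is a toy sketch in Python. Everything in it is illustrative: the data, the linear hypothesis family, and the pruning rule are invented for the example, and real Solomonoff induction ranges over all computable hypotheses and is itself incomputable.

```python
from itertools import product

# Observed data we want to explain: outputs of an unknown rule on inputs 0..4.
data = [(x, 2 * x + 1) for x in range(5)]

# A tiny finite hypothesis space: linear rules y = a*x + b with small coefficients.
hypotheses = [(a, b) for a, b in product(range(-5, 6), repeat=2)]

def fits(h, data):
    """Does hypothesis h = (a, b) explain every observation?"""
    a, b = h
    return all(a * x + y * 0 + b == y for x, y in data) if False else all(
        a * x + b == y for x, y in data
    )

# Exhaustive, Solomonoff-style: check every single hypothesis in the space.
exhaustive = [h for h in hypotheses if fits(h, data)]

# Pruned search: use prior knowledge (one data point) to narrow the space first.
# Any surviving rule must pass through (0, 1), which forces b == 1.
x0, y0 = data[0]
narrowed = [(a, b) for a, b in hypotheses if a * x0 + b == y0]
pruned = [h for h in narrowed if fits(h, data)]

print(exhaustive)                      # [(2, 1)]
print(pruned)                          # [(2, 1)]
print(len(hypotheses), len(narrowed))  # 121 11
```

Both routes find the same single surviving hypothesis, but the pruned search examines 11 candidates instead of 121; the brain-style optimizations described above are this idea scaled up to enormous spaces.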
Eagles, too: they can fly and I cannot. The question is whether the currently foreseeable computerizable tasks are closer to flying or to intelligence. Which in turn depends on how lofty and how “magical” we take intelligence to be.
Ugh, using Aristotelian logic? So it is not random hypothesis generation but reasoning based on causality and logic.
I think that, using your terminology, thinking is not the searching; it is the finding of logical relationships, so that not much space has to be searched.
OK, that makes sense. Perhaps we can agree that logic, causality, and actual reasoning are all about narrowing the hypothesis space to search. That is the intelligence, not the search.
I’m starting to suspect that we’re arguing over definitions. By search I mean the entire algorithm for finding the best hypothesis; both random hypothesis checking and Aristotelian logic (and any combination of these methods) fit. What do you mean?
Narrowing the hypothesis space is search. Once you have narrowed the hypothesis space to a single point, you have found an answer.
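A minimal illustration of that framing (an ordinary binary search, nothing specific to this discussion): the candidate answers form the hypothesis space, each comparison discards half of it, and when a single point remains, the search is done.

```python
def guess_number(secret, lo=0, hi=100):
    """Find `secret` in [lo, hi] by shrinking the hypothesis interval.

    The interval [lo, hi] is the current hypothesis space; every
    comparison eliminates half of it.
    """
    while lo < hi:
        mid = (lo + hi) // 2
        if mid < secret:
            lo = mid + 1   # discard the lower half of the space
        else:
            hi = mid       # discard the upper half
    return lo              # hypothesis space is now a single point

print(guess_number(42))  # 42
```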
As for eagles: if we build a drone that can fly as well as an eagle can, I’d say that the drone has an eagle-level flying ability; if a computer can solve all intellectual tasks that a human can solve, I’d say that the computer has a human-level intelligence.
Yes. Absolutely. When that happens inside a human being’s head, we generally call them ‘mass murderers’. Even I only cooperate with society because there is a net long-term gain in doing so; if that were no longer the case, I honestly don’t know what I would do. Awesome, that’s something new to think about. Thanks.
That’s probably irrelevant, because mass murderers don’t have power without all the rest. They are likely to have sentience and conversational ability with self-consciousness, at least.
Not sure. Suspect nobody knows, but seems possible?
I think the most instructive post on this is actually Three Worlds Collide, for making a strong case for the arbitrary nature of our own “universal” values.