Well, the assumptions are that everyone is using classical computers and that thermodynamics holds, so they must get energy from somewhere (albeit acquiring it with superhuman skill). That’s the list of assumptions necessary to prove the intelligence explosion scenario wrong.
I’ve already accepted that a superhumanly intelligent AI could take over the world within several years, which is arguably too fast for the public to realize what is happening, and in my own view AI x-risk odds hover around 30-60% this century, so this is still no joking matter. AI alignment still matters, and it still needs more researchers and money.
Now, I will be wrong if quantum/reversible computers become more practical than they are currently and companies can reliably train AI on them.
My post is more of a retrospective, as well as a claim that things take time to manifest their full impact.
EDIT: My point is, the Singularity as envisioned by Ray Kurzweil and John Smart has already happened; it will just take until the end of the century to fully play out. The end of the century will be very weird. It’s just that the weirdness arrives more continuously, with a discontinuity unlocking new possibilities that are then claimed through continuous effort by AGI and ASI.
That’s the list of assumptions necessary to prove the intelligence explosion scenario wrong.
No, it’s not. As I said, it’s a skyscraper of assumptions, each more dubious than the last. The entire line of reasoning from fundamental physics is useless because all you get are vacuous bounds like ‘if a kg of mass can do 5.4e50 quantum operations per second and the earth is 6e24 kg, then that bounds available operations at 3e65 operations per second’ - which is completely useless because why would you constrain it to just the earth? (Not even going to bother trying to find a classical number to use as an example—they are all, to put it technically, ‘very big’.) Why are the numbers spat out by appeal to fundamental limits of reversible computation, such as, but far from limited to, 3e75 ops/s, not enough to do pretty much anything compared to the status quo of systems topping out at ~1.1 exaflops, or 1.1e18 ops/s, 57 orders of magnitude below that one random guess? Why shouldn’t we say “there’s plenty of room at the top”? Even if there weren’t and you could ‘only’ go another 20 orders of magnitude, so what? What, exactly, would it be unable to do that it could if you subtracted or added 10 orders of magnitude,* and how do you know that? Why would this not decisively change economics, technology, politics, recursive AI scaling research, and everything else? If you argue that this means it can’t do something in seconds and would instead take hours, how is that not an ‘intelligence explosion’ in the Vingean sense of being an asymptote, happening far faster than prior human transitions that took millennia or centuries, and being a singularity past which humans cannot see nor plan? Is it not an intelligence explosion but an ‘intelligence gust of warm wind’ if it takes a week instead of a day? Should we talk about the intelligence sirocco instead? This is why I say the most reliable parts of your ‘proof’ are also the least important, which is the opposite of what you need, and serve only to dazzle and ‘Eulerize’ the innumerate.
* btw I lied; that multiplies to 3e75, not 3e65. Did you notice?
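For anyone who wants to follow the footnote’s arithmetic, here is a quick sketch; the figures are just the ones quoted above (5.4e50 ops/s per kg, 6e24 kg for the Earth, ~1.1e18 ops/s for today’s largest systems):

```python
import math

ops_per_kg = 5.4e50   # quoted quantum-limit figure, ops/s per kg
earth_mass = 6e24     # kg
bound = ops_per_kg * earth_mass
print(f"{bound:.1e} ops/s")  # ~3.2e75, i.e. 3e75, not the 3e65 in the quoted sentence

current = 1.1e18      # ~1.1 exaflops, today's top systems
print(f"{math.log10(bound / current):.0f} orders of magnitude of headroom")  # ~57
```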
I was talking about irreversible classical computers, where the Landauer limit bounds things much more harshly, not quantum computers relying on the much looser Margolus-Levitin limit.
To put this in perspective, there’s roughly a 35-order-of-magnitude difference between a quantum computer’s limit and a classical computer’s limit. Here’s a link:
https://www.lesswrong.com/posts/xwBuoE9p8GE7RAuhd/brain-efficiency-much-more-than-you-wanted-to-know
While the most pessimistic conclusions are probably wrong (I think people will accept an energy cost of 300 watts to 5 kilowatts to increase computation by one or two orders of magnitude, since intelligence is very valuable and such a gain would lead to a Vernor Vinge-style singularity), it’s a nice post for describing the difference.
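To make the comparison concrete, here is a rough sketch of the two limits in question. Assumptions: room temperature for the Landauer bound, and the full mass-energy of 1 kg for the Margolus-Levitin bound (the “ultimate laptop” framing behind the 5.4e50 figure upthread); the exact size of the gap depends on the power budget and temperature you grant the classical machine.

```python
import math

k_B  = 1.380649e-23     # Boltzmann constant, J/K
hbar = 1.054571817e-34  # reduced Planck constant, J*s
c    = 2.99792458e8     # speed of light, m/s

# Landauer limit: each irreversible bit operation dissipates at least k_B*T*ln(2) joules,
# so an irreversible classical computer is capped at this many bit-ops per second per watt.
T = 300.0  # assumed room temperature, K
landauer_ops_per_watt = 1 / (k_B * T * math.log(2))
print(f"irreversible classical: ~{landauer_ops_per_watt:.1e} bit-ops/s per watt")  # ~3.5e20

# Margolus-Levitin limit: a system with total energy E can do at most 2E/(pi*hbar) ops/s.
# Using the full mass-energy of 1 kg reproduces the 5.4e50 ops/s/kg figure quoted upthread.
E = 1.0 * c**2  # J of mass-energy in 1 kg
ml_ops_per_kg = 2 * E / (math.pi * hbar)
print(f"quantum/reversible:     ~{ml_ops_per_kg:.1e} ops/s per kg")  # ~5.4e50

# The two bounds are not in identical units (dissipated power vs. total mass-energy), so the
# gap depends on the assumed power budget; at a 1 W budget it works out to ~30 orders of magnitude.
print(f"gap at 1 W: ~{math.log10(ml_ops_per_kg / landauer_ops_per_watt):.0f} orders of magnitude")
```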
So it actually supports my post here, since I talked about how quantum/reversible computers would favor the intelligence explosion story. I am pessimistic about quantum/reversible computers becoming practical before 2100, which is why I favor the accelerating-change story.
So you and I actually agree here; are we getting confused about something?
And black hole computers are possible, but realistically will be developed post-singularity.
I think from an x-risk perspective the relevant threshold is: when AI no longer needs human researchers to improve itself. Currently there is no (publicly known) model which can improve itself fully automatically. The question we need to ask is, when will this thing get out of our control? Today, it still needs us.