I’m sure I don’t fully understand what you mean by ‘integrating it into new AI models’ but in any case it seems we disagree on forecasting the level and degree of advancement. I think models and integration will only be as good as the physical hardware they run on, which, to me, is the biggest bottleneck. It doesn’t seem practical that our current chips and circuit boards can house a superintelligence regardless of scale and modularity. So in 10 years I think we’ll have a lot of strong AGIs that are able to do a lot of useful and interesting work and we’ll probably need new subcategories to usefully describe them and tell them apart (I’m just spitballing on this point).
However, true AI (or superintelligence) that can cognitively outperform all of humanity and can self-improve will take longer and run on hardware that would be alien to us today. That’s not to say that AGI won’t be disruptive or dangerous, just not world-ending levels of dangerous. You could say that the endgame for AGI is the opening game of true AI.
It doesn’t seem practical that our current chips and circuit boards can house a superintelligence regardless of scale and modularity.
This is my disagreement point. I think that we will be able to build chips that can house a mild superintelligence, solely out of the energy used by the human brain, assuming the most energy efficient chips are used.
And if we allow ourselves to crank the energy up, then it’s pretty obviously achievable even using current chips.
And I think this is plenty dangerous, even an existential danger, even without exotic architectures, because of copying and coordination.
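To make the energy claim concrete, here’s a rough back-of-the-envelope sketch; every number in it (brain power draw, brain-equivalent compute, chip efficiency) is an assumed round figure for illustration, not a settled value:

```python
# Back-of-the-envelope: how much "brain-equivalent" compute fits in a power budget?
# All figures are rough assumptions for illustration:
#   BRAIN_WATTS - the human brain draws roughly 20 W
#   BRAIN_FLOPS - assumed brain-equivalent compute; real estimates span many orders of magnitude
#   efficiency  - FLOP/s per watt; ~1e12 is the order of a current accelerator,
#                 ~1e14 is a speculative "most energy efficient chips" figure

BRAIN_WATTS = 20.0
BRAIN_FLOPS = 1e15  # assumed brain-equivalent compute, FLOP/s

def brain_equivalents(power_watts: float, flops_per_watt: float) -> float:
    """Number of brain-equivalents of raw compute a given power budget can supply."""
    return power_watts * flops_per_watt / BRAIN_FLOPS

# At the brain's own ~20 W budget:
print(brain_equivalents(BRAIN_WATTS, 1e12))  # ~0.02 -> current chips fall short
print(brain_equivalents(BRAIN_WATTS, 1e14))  # ~2    -> very efficient chips: a few brains

# "Crank the energy up": a 10 MW cluster built from current chips
print(brain_equivalents(10e6, 1e12))         # ~10,000 brain-equivalents of raw compute
```

Raw FLOP/s obviously isn’t the same thing as intelligence, but this is the scale of the gap the argument turns on.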
So this:
However, true AI (or superintelligence) that can cognitively outperform all of humanity and can self-improve will take longer and run on hardware that would be alien to us today.
Is not necessary for AGI to happen, or to be a huge deal.
Don’t get me wrong, exotic hardware does exist, and if made practical it would be an even bigger deal: at the far end of exotic hardware, something like a quantum computer could simulate a brain essentially perfectly and would be a very big deal in all sorts of ways. But that isn’t necessary for things to be a wild ride due to AGI this century.
I think that we will be able to build chips that can house a mild superintelligence, solely out of the energy used by the human brain, assuming the most energy efficient chips are used.
I agree with this statement, just not any time soon since hardware advancement is relatively slow. I also agree that this century will be a wild ride due to AGI and I imagine that AGI will play an important role in developing the exotic hardware and/or architecture that leads to superintelligence.
Where we disagree is on the speed and order of these emerging technologies. I think we’ll have powerful AGIs this decade and they’ll have a huge impact, but they’ll still be quite narrow compared to a true superintelligence. My prediction is that superintelligence will emerge from iteration and development over time and will run on exotic hardware, probably developed with the help of AGI. My prediction is mostly informed by physical constraints and current rates of development.
As for timing I’m going to guess between one and two hundred years. (I wouldn’t be surprised if we have the technology to augment human intelligence before then, but the implications of that are not obvious. If advanced enough, maybe it leads to a world similar to some sci-fi stories where only analog tech is used and complex work or calculation is done by augmented humans.)
As for timing I’m going to guess between one and two hundred years.
Yep, that’s the basic disagreement we have: I expect this in 10-30 years, not 100-200 years, because I think we’re almost at the point where we can create such a mild superintelligence.
Where we disagree is on the speed and order of these emerging technologies.
Yes, this is our most general disagreement here: how fast things will move.
Maybe it would be useful to define ‘mild superintelligence.’ Would this be human baseline, or just a really strong AGI? Also, if AI fears spread to the general public as tech improves, isn’t it possible that it would take a lot longer to develop even a mild superintelligence because there would be regulations/norms in place to prevent it?
I hope your predictions are right. It could turn out that it’s relatively easy to build a ‘mild superintelligence’ but much more difficult to go all the way.
Roughly, I’m talking something like 10x a human’s intelligence, though in practice it’s likely 2-4x assuming it uses the same energy as a human brain.
But in this scenario, scaling up superintelligence is actually surprisingly easy: just add more energy, which buys more intelligence at the cost of more power.
Also, this is still a world that would see vast changes, fast.
I don’t believe we will go extinct or suffer a catastrophe, due to my beliefs around alignment, but this would still represent a catastrophic, potentially existential threat if the AGIs/mild ASIs wanted to cause one.
Remember, that would allow a personal phone or device to host a mild superintelligence, one that is 2-10x more intelligent than humans.
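For a sense of what the phone scenario would take, here’s the same kind of rough sketch, asking what chip efficiency a handset’s power budget would require (the ~5 W sustained draw and the brain-equivalent figure are assumptions):

```python
# What chip efficiency would a phone need to host a "mild superintelligence"?
# Assumed round figures for illustration only:
PHONE_WATTS = 5.0    # sustained power budget of a handset (assumption)
BRAIN_FLOPS = 1e15   # brain-equivalent compute, FLOP/s (assumption)

def required_flops_per_watt(brain_equivalents: float, power_watts: float = PHONE_WATTS) -> float:
    """Chip efficiency needed to fit that many brain-equivalents into the power budget."""
    return brain_equivalents * BRAIN_FLOPS / power_watts

for x in (2, 10):
    print(f"{x}x brain-equivalent on ~{PHONE_WATTS:.0f} W needs ~{required_flops_per_watt(x):.0e} FLOP/s per watt")
# 2x  -> ~4e+14 FLOP/s per watt
# 10x -> ~2e+15 FLOP/s per watt, i.e. several hundred to over a thousand times
#        the efficiency of today's accelerators
```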
That’s a huge deal in itself!