As for timing, I’m going to guess between one and two hundred years.
Yep, that’s our basic disagreement, since I expect this in 10-30 years, not 100-200 years, because I think we’re almost at the point where we can create such a mild superintelligence.
We disagree on the speed and order of these emerging technologies.
Yes, this is our most general disagreement here: how fast things will move.
Maybe it would be useful to define ‘mild superintelligence.’ Would this be human baseline? Or just a really strong AGI? Also, if AI fears spread to the general public as the tech improves, isn’t it possible that it would take a lot longer to develop even a mild superintelligence, because there would be regulations/norms in place to prevent it?
I hope your predictions are right. It could turn out that it’s relatively easy to build a ‘mild superintelligence’ but much more difficult to go all the way.
Roughly, I’m talking about something like 10x a human’s intelligence, though in practice it’s likely 2-4x assuming it uses the same energy as a human brain.
But in this scenario, scaling up superintelligence is actually surprisingly easy: you just add more energy, and the extra energy buys more intelligence.
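To make that scaling claim concrete, here is a minimal back-of-envelope sketch. It takes the 2-4x-at-brain-energy figure from above, adds the commonly cited ~20 W estimate for the human brain's power draw, and assumes intelligence scales linearly with the power budget; the linear scaling and the specific wattages are illustrative assumptions, not claims made in this exchange.

```python
# Back-of-envelope sketch of the scaling assumption above.
# Assumptions (illustrative only): a human brain draws roughly 20 W,
# brain-level power yields the 2-4x multiple mentioned above (3x used here),
# and the intelligence multiple scales linearly with the power budget.

BRAIN_POWER_WATTS = 20.0   # rough estimate of human brain power draw
BASELINE_MULTIPLE = 3.0    # assumed ~2-4x human intelligence at brain-level power

def intelligence_multiple(power_watts: float) -> float:
    """Assumed linear scaling of the intelligence multiple with power."""
    return BASELINE_MULTIPLE * (power_watts / BRAIN_POWER_WATTS)

if __name__ == "__main__":
    # A few power budgets: a phone-scale device, a brain, a desktop, a rack.
    for watts in (5, 20, 100, 1000):
        print(f"{watts:>5} W -> ~{intelligence_multiple(watts):.1f}x human intelligence")
```

Under these (very rough) assumptions, a phone-scale power budget of a few watts already lands in the low single-digit multiples, which is the kind of arithmetic behind the device-hosting point below.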
Also, this is still a world that would see vast changes, fast.
I don’t believe we will go extinct or have a catastrophe, due to my beliefs around alignment, but this would still represent a catastrophic, potentially existential threat if the AGIs/mild ASIs wanted to cause one.
Remember, that would allow a personal phone or device to host a mild superintelligence that is 2-10x more intelligent than humans.
That’s a huge deal in itself!