I am not sure what the silent majority belief on this site is (counting people, not karma). Is Yudkowsky's worldview basically right or wrong?
I think this will depend strongly on where you draw the line on “basically”. I think the majority probably thinks:
AI is likely to be a really big deal
Existential risk from AI is at least substantial (e.g. >5%)
AI takeoff is reasonably likely to happen quite quickly in wall-clock time if this isn't actively prevented (e.g. AI will cause there to be <10 years between a 20% annualized GDP growth rate and a 100x annualized growth rate)
The power of full technological maturity is extremely high (e.g. nanotech, highly efficient computing, etc.)
But, I expect that the majority of people don’t think:
Inside view, existential risk is >95%
A century of dedicated research on alignment (targeted as well as society would realistically manage) is insufficient to get risk <15%.
Which I think are both beliefs Yudkowsky has.
For me:
Yes to AI being a big deal and extremely powerful (I doubt anyone would be here otherwise).
Yes: I don't think anyone can reasonably claim it's <5%, but then neither can they for not having AI, if x-risk is defined as humanity missing practically all of its cosmic endowment.
Maybe: even with a slow, hardware-constrained takeoff you get much greater GDP growth, though I don't agree with 100x for the critical period (100x could happen later). E.g. car factories are retooled to produce robots and we get 1-10 billion additional minds and bodies per year, but not quite 100x. ~10x per year is enough to be extremely disruptive and an x-risk anyway; a rough sketch of the arithmetic follows this list.
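To make the "~10x, not 100x" intuition concrete, here is a rough back-of-envelope sketch. The workforce size, robot production rate, and per-robot productivity below are illustrative assumptions of mine, not figures from the comment above.

```python
# Rough back-of-envelope for the "~10x per year, not 100x" growth intuition.
# All inputs are illustrative assumptions, not figures from the original comment.

current_worker_equivalents = 4e9   # assumed effective human workforce today
robots_added_per_year = 10e9       # upper end of the 1-10 billion new minds/bodies per year
output_per_robot_vs_human = 3      # assumed: runs continuously, no breaks, higher skill

new_capacity = robots_added_per_year * output_per_robot_vs_human
growth_multiple = (current_worker_equivalents + new_capacity) / current_worker_equivalents
print(f"Implied output multiple after one year: ~{growth_multiple:.1f}x")
# ~8.5x with these inputs: hugely disruptive, but well short of 100x per year.
```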
---
(1)
Yes, I don't think x-risk is >95%: say 20% as a very rough guess at the chance that humanity misses all of its cosmic endowment. I think AI x-risk needs to be put in this context. Say you ask someone:
“What’s the chance that humanity becomes successfully interstellar?”
If they say 50/50, then being OK with any AI x-risk below 50% is quite defensible, provided getting AI right makes it practically certain that you get your cosmic endowment; a small sketch of the comparison is below.
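A minimal sketch of that comparison, under the simplifying assumption (mine, not a claim from the thread) that getting AI right makes the cosmic endowment essentially certain:

```python
# Compare P(humanity secures its cosmic endowment) on the two paths.
# Simplifying assumption (mine): aligned AI makes the endowment ~certain,
# so the "build AI" path succeeds with probability 1 - p_ai_doom.

def compare_paths(p_ai_doom: float, p_interstellar_without_ai: float = 0.5) -> dict:
    """Return the success probability of 'build AI' vs 'no AI' and which is higher."""
    with_ai = 1.0 - p_ai_doom
    without_ai = p_interstellar_without_ai
    return {"build AI": with_ai, "no AI": without_ai, "AI path better": with_ai > without_ai}

print(compare_paths(0.20))  # the rough 20% guess above: 0.8 vs 0.5, AI path better
print(compare_paths(0.55))  # once AI x-risk exceeds the 50/50 baseline, the comparison flips
```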
---
(2)
I do think it's defensible that a century of dedicated alignment research doesn't get risk below 15%, but only because alignment research is useful just a little in advance of capabilities: if we had a 100-year pause, I still wouldn't have confidence in our alignment plan at the end of it.
Anyway, regarding x-risk, I don't think there is a completely safe path. Go too fast with AI and the risk is obvious; go too slow and there are other obvious risks. Our current situation is likely unstable. Consider the famous quote:
“If you want a picture of the future, imagine a boot stamping on a human face— forever.”
I believe that is now possible with current tech, where it was not for, say, Soviet Russia. So we may be in a situation where societies can go 1984-style totalitarian and not come back, because our technology and coordination tools are now sufficient to stop centralized empires from collapsing. LLMs, of course, make censorship even easier. (I am sure there are other ways our current tech could destroy most societies as well.)
If that's the case, a long pause could result in all power ending up in such societies, which, when the pause ended, would be very likely to screw up alignment.
That makes me unsure what regulation to advocate for, though I am in favor of slowing down AI hardware progress while fully exploring the capabilities of our current hardware.
Most importantly, I think we should hugely speed up Neuralink-type devices and brain uploading. I would identify much more with an uploaded human that was then carefully and appropriately upgraded to superintelligence than with the alternative path where a pure AI superintelligence is made.
We have to accept that we live in critical times, and just slowing things down is not necessarily the safest option.