To be clear, the sole reason I assumed (initial) alignment in this post is that if there is an unaligned ASI, then we probably all die for reasons that don't require SNC (though SNC might have a role in the specifics of how the really bad outcome plays out). So "aligned" here basically means: powerful enough to be called an ASI, and won't kill everyone if SNC is false (and isn't controlled/misused by bad actors, etc.).
> And the artificiality itself is the problem.
This sounds like a pretty central point that I did not explore very much beyond some intuitive statements at the end (the bulk of the post summarizes the "fundamental limits of control" argument), so I'd be interested in hearing more about it. I think I get (and hopefully roughly conveyed) the idea that AI has different needs from its environment than humans do, so if it optimizes the environment in service of those needs, we die... but I get the sense that something deeper is intended here.
A question along this line (please ignore it if it's a distraction from, rather than illustrative of, the above): would anything like SNC apply if tech labs were somehow using bioengineering to create creatures to perform the kinds of tasks that would be done by advanced AI?
> would anything like SNC apply if tech labs were somehow using bioengineering to create creatures to perform the kinds of tasks that would be done by advanced AI?
In that case, substrate-needs convergence would not apply, or only apply to a limited extent.
There is still a concern about what those bio-engineered creatures, used in practice as slaves to automate our intellectual and physical work, would bring about over the long term.
If they made a successful attempt to 'upload' their cognition onto networked machinery, then we'd be stuck with the substrate-needs convergence problem again.