Aren’t a lot of the doomy scenarios predicated on a single monolithic AI though? (Or multipolar AIs that all agree to work together for naughtiness for some reason.)
A bunch of them being tinkered on by lots of people seems like an easier path to alignment, and a failsafe in terms of power distribution.
You have lots of smaller-scale dangers introduced, but they certainly don't seem to me to rise to the level of x- or s-risk in the near term.
What have we had thus far? A bunch of think tanks using deductive reasoning with no access to good models, and a few monoliths with all the access. Seems to me that having the capability to actually run experiments at a community level will necessarily boost efforts on alignment and value loading more than it assists actual AGI being born.