I currently believe with fairly high confidence that AI Leviathan is the only plausibly workable approach that preserves the Bright Future (humanity expanding beyond Earth). I think a Butlerian Jihad moratorium must either eventually be revoked in order to settle other worlds, or fail over the long term because its control regime cannot be maintained indefinitely.
I do think that a temporary moratorium to allow time for AI alignment research is a reasonable plan, but it should be explicit both about being temporary and about the need to invest substantial resources in alignment research during the pause. Additionally, a temporary moratorium could get by with a much less strict enforcement and control scheme: controlling just data centers for 10 or 20 years would probably suffice, with no need to confiscate every personal computer in the world.
I don’t believe defensive acceleration is plausibly viable, due to specific concerns about the nature of known technologies. This view could change if new defensive technology were discovered, but I don’t anticipate that, and I think many actions taken in pursuit of defensive acceleration are actively counterproductive for pursuing an AI Leviathan. I would therefore like to convince people to abandon defensive acceleration until the technological strategic landscape shifts substantially in favor of defense.
I have a lot of reasoning and research behind my views on this, and would enjoy a thoughtful discussion with someone who sees things differently. I’ve enjoyed the discussions I’ve had via LessWrong so far.
Is humanity expanding beyond Earth a requirement or a goal in your world view?