If an AI can be Aligned externally, then it’s already safe enough. It feels like...
You’re not talking about solving Alignment, but talking about some different problem. And I’m not sure what that problem is.
For your proposal to work, the problem needs to be already solved. All the hard/interesting parts need to be already solved.
I’m talking about the need for all AIs (and humans) to be bound by legal systems that include key consensus laws/ethics/values. It may seem obvious, but I think this position is under-appreciated and not universally accepted.
By focusing on the external legal system, many key problems associated with alignment (as recited in the Summary of Argument) are addressed. One worth highlighting is 4.4, which suggests AISVL can assure alignment in perpetuity despite changes in values, environmental conditions, and technologies, i.e., a practical implementation of Yudkowsky’s CEV.
Maybe you should edit the post to add something like this:
My proposal is not about the hardest parts of the Alignment problem. My proposal is not trying to solve the theoretical problems with Inner Alignment or Outer Alignment (Goodhart, loopholes). I’m simply assuming those problems won’t turn out to be relevant. Or that humanity simply won’t create anything AGI-like (see CAIS).
Instead of discussing the usual problems in Alignment theory, I merely argue X. X is not a universally accepted claim, here’s evidence that it’s not universally accepted: [write the evidence here].
...
By focusing on the external legal system, many key problems associated with alignment (as recited in the Summary of Argument) are addressed. One worth highlighting is 4.4, which suggests AISVL can assure alignment in perpetuity despite changes in values, environmental conditions, and technologies, i.e., a practical implementation of Yudkowsky’s CEV.
I think the key problems are not “addressed”, you just assume they won’t exist. And laws are not a “practical implementation of CEV”.