I am hopeful despite suspecting that there is no solution to the hard technical core of alignment that holds through arbitrary levels of intelligence increase. I think if we can get a hacky good-enough alignment at just a bit beyond human-level, we can use that tool, along with government enforcement, to prevent anyone from making a stronger rogue AI.
I know you deleted this, but I personally believe it's worth noting that there is no evidence alignment is a solvable problem.
I’m worried I didn’t strike a good tone, but I’ve un-deleted it for what it’s worth.