Downvote for including unsubstantiated claims as part of your headlines and not even trying to back them up. “Alignment is a solvable problem”...? (Maybe...? Probably...? Hopefully...? But who knows, except in some irrelevant academic sense.) I like the general tone, but things like this discourage me from reading any further.
I know you deleted this, but I do think it’s worth noting that there is no evidence that alignment is a solvable problem.
I am hopeful despite suspecting that there is no solution to the hard technical core of alignment that holds through arbitrary levels of intelligence increase. I think if we can get a hacky good-enough alignment at just a bit beyond human-level, we can use that tool, along with government enforcement, to prevent anyone from making a stronger rogue AI.
I’m worried I didn’t strike a good tone, but un-deleted it for what it’s worth.
I think that’s fair, Amalthea. However, I think it’s worth encouraging people with approximately the right orientation towards the problem, even if their technical grasp of it is not yet refined. I’m not sure this forum is the best place for a flood of recently-become-aware people to speak out in favor of trying hard to keep us from being doomed. But on the other hand, I don’t have an alternate location in mind for them to start on the journey of learning... So...
I don’t have an issue with the general purpose of the post. I do think it’s not great to simply state things as true which are not known, and for which the OP doesn’t have any strong evidence (and in a way that could easily be misinterpreted as spoken from expertise). To be fair, I have similar issues with some of Eliezer’s remarks, but at least he has done the work of going through every counterargument he can think of.
Yes, I think that’s a fair critique.