Complexity is objectively quantifiable, so I don't think I understand your point. This is an example of complexity being applied to specific domains.
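To make "objectively quantifiable" concrete, here is a minimal sketch (a toy example of my own, not anything from the thread), assuming zlib compression length as a crude, computable proxy for Kolmogorov complexity, which itself is uncomputable:

```python
import zlib

def complexity_proxy(text: str) -> int:
    # Length of the zlib-compressed bytes: a crude, computable
    # stand-in for Kolmogorov complexity (which is uncomputable).
    return len(zlib.compress(text.encode("utf-8")))

# A highly regular string compresses well -> lower score.
print(complexity_proxy("ab" * 50))
# Less regular text compresses poorly -> higher score.
print(complexity_proxy("the quick brown fox jumps over the lazy dog" * 2))
```

The absolute numbers mean little; the point is only that two strings can be ranked by a fixed, observer-independent measure.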
My point is that complexity, no matter how objective a concept, is relative. Things we once thought were "hard" or "complex" often turn out now not to be.
Still with me? Agree, disagree?
Patterns are a way of managing complexity, sorta, so perhaps if we find some patterns that work to ensure "human alignment[1]", they will also work for "AI alignment" (tho mostly I think there is a wide, wide gap betwixt the two, and the latter can only come after the former).
We like to think we're so much smarter than the humans who came before us, and that things (society, relationships, technology) are so much more complicated than they used to be, but I believe a lot of that is just perception and bias.
If we do get to AGI and ASI, it’s going to be pretty dang cool to have a different perspective on it, and I for one do not fear the future.
[1] Assuming alignment is possible at all: "how strong of a consensus is needed?", etc.