I agree that making these changes on a society-wide scale seems intractable within a 5-year time frame to transformative AI (i.e. by 2027).
Note that these are plans that don’t need lots of people to be on board with them, just a select few. Raise the sanity waterline not of the majority of society, but just of some of its top thinkers who are currently not working on AI alignment, and they may start working on AI alignment and have a large impact. Or develop a gene modification therapy that substantially increases the intelligence even of already-smart people, find a few brave AI alignment researcher volunteers to take it secretly, and again you may have a large impact on AI alignment research progress rates. Although, I really doubt that a successful and highly impactful gene therapy could be developed, even in small animal models, within a 5-year time frame.
I agree that if you hyper-focus on a very small number of people, the plan would be much more tractable (though still not really tractable in my view).
My stance is that you will get significantly more leverage by using AIs to augment alignment researchers than by trying to make them a little smarter via reading a document or gene therapy. One of my agendas focuses on the former, so that’s probably worth factoring in.