I don’t have good ideas here, but something that results in increasing the average Lawfulness among humans seems like a good start. Maybe step 0 of this is writing some kind of Law textbook or Sequences 2.0 or CFAR 2.0 curriculum, so people can pick up the concepts explicitly from more than just, like, reading glowfic and absorbing it by osmosis. (In planecrash terms, Coordination is a fragment of Law that follows from Validity, Utility, and Decision.)
This kind of stuff is a dream that won't work. It's just not pragmatic, imo. You're not gonna get people to read the right sequence of words and then solve coordination. There's this dream of raising the sanity waterline, but I just don't see it panning out anytime soon. I also don't see widespread intelligence-enhancing gene therapy happening anytime soon (people are scared of vaccines, let alone gene therapy, which gives many people an ick feeling due to the history of eugenics…).
I agree that making these changes on a society-wide scale seems intractable within a 5-year timeframe to transformative AI (i.e. by 2027).
Note that these are plans that don't need lots of people to be on board with them, just a select few. Raise the sanity waterline not of the majority of society, but just of some of its top thinkers who are currently not working on AI alignment, and they may start working on AI alignment and may have a large impact. Come up with a gene-modification therapy that substantially increases the intelligence even of already-smart people, find a few brave AI alignment researcher volunteers to take it secretly, and again you may have a large impact on AI alignment research progress rates. That said, I really doubt that a successful and highly impactful gene therapy could be developed, even in small animal models, within a 5-year timeframe.
I agree that if you hyper-focus on a very small number of people, the plan becomes much more tractable (though still not really tractable, in my view).
My stance is that you will get significantly more leverage by using AIs to augment alignment researchers than by trying to make them a little smarter via reading a document or gene therapy. One of my agendas focuses on the former, so that's probably worth factoring in.