I’d ask whether things are typically aligned or not.
Just out of interest, how exactly would you ask that question?
There’s a good argument that many systems are not aligned.
Certainly. This is a big issue in our time. Something needs to be done or things may really go off the rails.
Ecosystems, society, companies, families, etc all often have very unaligned agents.
Indeed. Is there anything that can be done?
AI alignment, as you pointed out, is a higher-stakes game.
It is a very high-stakes game. How might we proceed?