Rough intuitive idea: intelligence/agency are about generalizable problem-solving capability. How do you incentivize generalizable problem-solving capability? Ask the system to solve a wide variety of problems, or a problem general enough to encompass a wide variety.
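As a rough ML analogy, here is a minimal sketch of that incentive structure: train against a whole distribution of tasks rather than one fixed task, so that nothing task-specific ever pays off and only strategies that work across tasks survive. (This is just an illustrative gloss; the names `sample_task`, `train`, and `learner_update` are hypothetical stand-ins, not any particular system.)

```python
import random

def sample_task():
    """Draw a fresh task from a broad family (here, a toy arithmetic family)."""
    a, b = random.randint(-100, 100), random.randint(-100, 100)
    op = random.choice(["+", "-", "*"])
    answer = {"+": a + b, "-": a - b, "*": a * b}[op]
    return (a, op, b), answer

def train(learner_update, steps=10_000):
    """Generic loop: every step presents a new task instance, so memorizing
    one task's quirks is useless; the learner is pushed toward generality."""
    for _ in range(steps):
        task, answer = sample_task()
        learner_update(task, answer)

# Example usage with a trivial "learner" that just records what it has seen:
seen = []
train(lambda task, answer: seen.append((task, answer)), steps=100)
```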
If you want an organization to act agenty, then a useful technique is to constantly force the organization to solve new, qualitatively different problems. An organization in a highly volatile market subject to lots of shocks or distribution shifts will likely develop some degree of agency naturally.
Organizations with an adversary (e.g. traders in the financial markets) will likely develop some degree of agency naturally, as their adversary frequently adopts new methods to counter the organization’s current strategy. Red teams are a good way to simulate this without a natural adversary.
Some organizations need to solve a sufficiently broad range of problems as part of their original core business that they develop some degree of agency in the process. These organizations then find it relatively easy to expand into new lines of business. Amazon is a good example.
Conversely, businesses in stable industries facing little variability will end up with little agency. They won't be able to solve new problems efficiently, will likely be wiped out by a large shock or distribution shift in the market, and won't be good at expanding or pivoting into new lines of business. They'll tend to be adaptation-executors rather than profit-maximizers, to a much greater extent than agenty businesses.
This all also applies at a personal level: if you want to develop general problem-solving capability, then tackle a wide variety of problems. Try problems in many different fields. Try problems with an adversary. Try problems of different kinds and different levels of difficulty. Don't just guess which skills or tools generalize well; go out and find out.
If we don’t know what to expect from future alignment problems, then developing problem-solving skills and organizations which generalize well is a natural strategy.
Relevant topic of a future post: some of the ideas from Risks From Learned Optimization or the Improved Good Regulator Theorem offer insights into building effective institutions and developing flexible problem-solving capacity.