Thanks for asking the question!
Some things I’d especially like to see change (insofar as I know what is happening) are:
Making more use of available options to improve AI safety (I think there are more options than Anthropic seems to believe. For instance, 30% of funds could be allocated to AI safety research if framed well, and it would probably stay below the noise threshold/froth of VC investing. There is also probably a fair degree of freedom to socially promote concern about unaligned AGI.)
Explicit ways to handle various types of events, such as organizational value drift, a hostile government takeover, the organization getting sold or unaligned investors gaining control, or another AGI company taking a clear lead
Enforceable agreements to, in certain AGI safety situations, stop racing and pool resources (a possible analogy from nuclear safety is a no-first-strike policy)
Allocating a significant fraction of resources (e.g. > 10% of capital) to AGI technical safety, organizational AGI safety strategy, and AGI governance
An organization consists of its people, and great care needs to be taken in hiring employees and in their training and motivation for AGI safety. Otherwise, I expect Anthropic to regress towards the mean (via an eternal September) and we’ll end up with another OpenAI situation where AGI safety culture is gradually lost. I want more work to be done here. (See also: “Carefully Bootstrapped Alignment” is organizationally hard)
The owners of a company are also very important, and ensuring that the LTBT has teeth and that its members are selected well is key. Furthermore, voting stock should be preferentially allocated to AGI-aligned investors. Teaching investors about the company and what it does, including AGI safety issues, would also be valuable. More speculatively, you could have different types of voting stock for different types of issues and build a system around this (sketched below).
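To make that speculative voting-stock idea a bit more concrete, here is a minimal sketch (Python; the share classes, issue categories, weights, and holder names are hypothetical, not anything Anthropic or the LTBT actually uses) of how per-issue voting weights could be tallied:

```python
from dataclasses import dataclass

# Hypothetical share classes: each class carries a different voting weight
# per issue category. Names and weights are illustrative only.
ISSUE_WEIGHTS = {
    "safety_policy": {"mission_aligned": 10, "common": 1},
    "commercial":    {"mission_aligned": 1,  "common": 1},
}

@dataclass
class Holding:
    holder: str
    share_class: str  # e.g. "mission_aligned" or "common"
    shares: int

def tally(holdings, issue, votes):
    """Tally weighted votes on a given issue.

    votes maps holder -> True (for) / False (against);
    holders who don't vote are counted as against (a simplification).
    Returns (for_weight, against_weight).
    """
    for_w = against_w = 0
    for h in holdings:
        weight = h.shares * ISSUE_WEIGHTS[issue][h.share_class]
        if votes.get(h.holder, False):
            for_w += weight
        else:
            against_w += weight
    return for_w, against_w

# Example: a safety-aligned holder outvotes a larger common-stock block
# on a safety issue, but not on a commercial one.
holdings = [
    Holding("aligned_fund", "mission_aligned", 100),
    Holding("vc_fund", "common", 500),
]
print(tally(holdings, "safety_policy", {"aligned_fund": True}))  # (1000, 500)
print(tally(holdings, "commercial", {"aligned_fund": True}))     # (100, 500)
```

The point is just that issue-specific voting power can be made mechanical; the real work would be in the legal structuring of the share classes.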
More generally, you can use the following typology to inspire further interventions.
Intervention points to change/form an AGI company and its surroundings toward safer x-risk outcomes (I’ve used this in advising startups on AI safety; it is also related to my post on positions where people can be in the loop):
Type of organization: nonprofit, public benefit organization, have a partner non-profit, join the government
Rules of the organization and event triggers:
Rules:
x-risk mission statement
x-risk strategic plan
Triggering events:
Gets very big: windfall clause
Gets sold to another party: ethics board, restrictions on potential sale
Value drift: reboot board and CEOs, shut it down, allocate more resources to safety, build a new company, put the ethics board in charge, build a monitoring system, some sort of line in the sand
AI safety isn’t viable yet but dangerous AGI is: shut it down or pivot to sub-AGI research and product development
Hostile government tries to take it over: shut it down, change countries (see also: Soft Nationalization: How the US Government Will Control AI Labs)
Path decisions for organization: ethics board, aligned investors, good CEOs, giving x-risk orgs or people power over key choices, voting stock to aligned investors, periodic x-risk safety reminders
Resource allocation by organization: precommitting a varying percentage of money/time to x-risk reduction based on conditions, with some committed up front; commitment devices for funding allocation into the future
Owners of organization: aligned investors, voting stock for aligned investors, necessary percentage as aligned investors
Executive decision making: good CEOs, company mission statement?, company strategic plan?
Employees: select employees preferably by alignment, have only aligned people hire folks
Education of employees and/or investors by x-risk folks: employee training in x-risks and information hazards, a company culture that takes doing good seriously, coaching and therapy services
Social environment of employees: exposure to EAs and x-risk people socially at events, x-risk community support grants, a public pledge
Customers of organization: safety score for customers, differential pricing, customers have safety plans and information hazard plans
Uses of the technology: terms of service
Suppliers of organization: (mostly not relevant), select ethical or aligned suppliers
Difficulty to steal or copy: trade secrets, patents, service based, NDAs, (physical security)
Internal political hazards: (standard)
Information hazards: an institutional framework for research groups (FHI has a draft document)
Cyber hazards: (standard IT)
Financial hazards: (standard finances)
External political hazards: government industry partnerships, talk with x-risk folks about this, external x-risk outreach
Monitoring by x-risk folks: quarterly reports to x-risk organizations
Projection by x-risk folks: commissioned projections, x-risk prediction market questions
Meta research and x-risk research: an AI safety team, AI safety grants, meetups on organizational safety at x-risk orgs, teams and grants for x-risk strategy and AI safety strategy, an information hazard grant question, going through these ideas in a checklist fashion and allocating company computer folders to them (and they will get filled up; see the sketch after this list), a scalable and efficient grant-giving system, forming an accelerator, competitions, hackathons, BERI-type project support
Coordination hazards: incentivized coordination through cheap resources for joint projects, government industry partnerships, coordination theory and implementation grants, concrete coordination efforts, joint ethics boards, mergers with other groups to reduce arms race risks
Specific safety procedures: (depends on the project)
Jurisdiction: choosing a good legal jurisdiction
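As a small illustration of the “checklist plus company folders” item in the list above, here is a minimal sketch (Python; the folder names are just slugs of the intervention points above, and the base directory name is made up) that scaffolds one folder per intervention point with a notes file to fill in:

```python
from pathlib import Path

# Intervention points from the typology above, used as checklist folders.
INTERVENTION_POINTS = [
    "type_of_organization", "rules_and_triggering_events", "path_decisions",
    "resource_allocation", "owners", "executive_decision_making", "employees",
    "education", "social_environment", "customers", "uses_of_technology",
    "suppliers", "difficulty_to_steal_or_copy", "internal_political_hazards",
    "information_hazards", "cyber_hazards", "financial_hazards",
    "external_political_hazards", "monitoring", "projection", "meta_research",
    "coordination_hazards", "specific_safety_procedures", "jurisdiction",
]

def scaffold(base_dir="x_risk_checklist"):
    """Create one folder per intervention point, each seeded with a notes file."""
    base = Path(base_dir)
    for point in INTERVENTION_POINTS:
        folder = base / point
        folder.mkdir(parents=True, exist_ok=True)
        notes = folder / "notes.md"
        if not notes.exists():
            notes.write_text(f"# {point.replace('_', ' ').title()}\n\n- [ ] reviewed\n")

if __name__ == "__main__":
    scaffold()
```

Running it once gives a skeleton that the relevant teams can fill in over time, which is roughly what I mean by “allocate company computer folders to them and they will get filled up.”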