On second thought: don’t we already have orgs that work on AI governance/policy? I would expect them to be more likely to have the skills/expertise to pull this off, right?
So, here’s a thing that I don’t think exists yet (or, at least, it doesn’t exist visibly enough that I know about it and can link it to you): a map of the overall space. Who’s out there? What ‘areas of responsibility’ do they think they have? What ‘areas of responsibility’ do they not want to have? What are the holes in the overall space? There probably are lots of orgs that work on AI governance/policy, and each of them is probably trying to cover a narrow corner of the space rather than trying to hold ‘all of it’.
So if someone says “I have an idea for how we should regulate medical AI stuff... oh, CSET already exists, I should leave it to them”, CSET’s response would probably be “What? We focus solely on the national security implications of AI; medical regulation isn’t on our radar, let alone an area where we’d mind competition.”
I should maybe note here that there’s a common thing I see in EA spaces that only sometimes makes sense, so I want to point at it so that people can deliberately decide whether or not to do it. In selfish, profit-driven worlds, competition is the obvious thing to do: when someone else has discovered that you can make profits by selling lemonade, you should maybe also try to sell lemonade to get some of those profits, instead of saying “ah, they have lemonade handled.” In altruistic, overall-success-driven worlds, competition is the obvious thing to avoid: there are so many undone tasks that you should try to find a task that no one is working on, and then work on that.
One downside is that this means the eventual allocation of institutions/people to roles is driven largely by inertia and ‘who showed up when that was the top item in the queue’ instead of ‘who is the best fit now’. [This can be sensible if everyone ‘came in as a generalist’ and had to skill up from scratch, but it still seems somewhat questionable; even if people are generalists when it comes to skills, they’re probably not generalists when it comes to personality.]
Another downside is that it probably makes more sense to have a second firm attempting to solve the biggest problem before you get a first firm attempting to solve the twelfth-biggest problem. Having a sense of the relative values of the different approaches, and of how much they depend on each other or on things that don’t exist yet, might be useful.