Not sure if I’m the right person, but it seems worth thinking about how one might approach this if one were to do it.
So the idea is to have an AI-Alignment PR/Social Media org/group/NGO/think tank/company whose goal is to contribute to a world with a more diverse set of high-quality ideas about how to safely align powerful AI. The only other organization roughly in this space that I can think of is 80,000 Hours, which is also somewhat more general in its goals and more conservative in its strategies.
I’m not a sales/marketing person, but as I understand it, the usual metaphor to use here is a funnel?
- Starting with maybe ads / sponsorships trying to reach the right people[0] (e.g. I saw Jane Street sponsor Matt Parker)
- then narrowing down more and more, first by introducing people to why this is an issue (orthogonality, instrumental convergence)
- hopefully having them realize for themselves, guided by arguments, that this is an issue that genuinely needs solving and that maybe their skills would be useful
- increasing the math as needed
- finally, somehow selecting for self-reliance and providing a path for getting started with thinking about this problem by themselves / model building / independent research
- or otherwise improving the overall situation (convince your congress member of something? run for congress? …)
Probably that would include copywriting (or hiring or contracting copywriters) to go over a number of our documents to make them more digestible and actionable.
So, I’m probably not the right person to get this off the ground, because I don’t have a clue about any of this (not even entrepreneurship in general). But it does seem like a thing worth doing, and maybe like an initiative that would get funding from whoever funds such things these days?
[0] Though maybe we should also work toward a better understanding of who “the right people” are? Given that our current bunch of ML researchers/physicists/mathematicians have not been able to solve it, maybe it is time to consider broadening the net in some responsible way.
On second thought: don’t we already have orgs that work on AI governance/policy? I would expect them to be more likely to have the skills/expertise to pull this off, right?
So, here’s a thing that I don’t think exists yet (or, at least, it doesn’t exist enough that I know about it to link it to you): who’s out there, what ‘areas of responsibility’ do they think they have, which ‘areas of responsibility’ do they not want to have, and what are the holes in the overall space? There probably are lots of orgs that work on AI governance/policy, and each of them is probably trying to cover a narrow corner of the space, instead of trying to hold ‘all of it’.
So if someone says “I have an idea for how we should regulate medical AI stuff—oh, CSET already exists, I should leave it to them”, CSET’s response will probably be “What? We focus solely on the national security implications of AI; medical regulation is not on our radar, let alone a place where we don’t want competition.”
I should maybe note here that there’s a common thing I see in EA spaces that only sometimes makes sense, and so I want to point at it so that people can deliberately decide whether or not to do it. In selfish, profit-driven worlds, competition is the obvious thing to do: when someone else has discovered that you can make profits by selling lemonade, you should maybe also try to sell lemonade to get some of those profits, instead of saying “ah, they have lemonade handled.” In altruistic, overall-success-driven worlds, competition is the obvious thing to avoid: there are so many undone tasks that you should try to find a task that no one is working on, and then work on that.
One downside is that this means the eventual allocation of institutions/people to roles is driven hugely by inertia and ‘who showed up when that was the top item in the queue’ instead of ‘who is the best fit now’. [This can be sensible if everyone ‘came in as a generalist’ and had to skill up from scratch, but it still seems somewhat questionable; even if people are generalists when it comes to skills, they’re probably not generalists when it comes to personality.]
Another downside is that it probably makes more sense to have a second firm attempting to solve the biggest problem before you get a first firm attempting to solve the twelfth-biggest problem. Having a sense of the various values of the different approaches—and how much they depend on each other, or on things that don’t exist yet—might be useful.
🤔
...yet!