I wonder if we could be much more effective in outreach to these groups?
Like making sure that Robert Miles is sufficiently funded to have a professional team +20% (if that is not already the case).
Maybe reaching out to Sabine Hossenfelder and sponsoring a video, or maybe collaborating with her on a video about this. Though I guess, given her attitude towards the physics community, working with her might be a gamble and a double-edged sword.
Can we get market research on which influencers have a large following among ML researchers/physicists/mathematicians, and then work with them / sponsor them?
Or maybe micro-target this demographic with Facebook/Google/GitHub/Stack Exchange ads and point them to something?
I don’t know, I’m not a marketing person, but I feel like I would have seen much more of these things if we were doing enough of them.
Not saying that this should be MIRI’s job, rather stating that I’m confused because I feel like we as a community are not taking an action that would seem obvious to me. Especially given how recent advances in published AI capabilities seem to make the problem much more legible. Is the reason for not doing it really just that we’re all a bunch of nerds who are bad at this kind of thing, or is there more to it that I’m missing?
While I see that there is a lot of risk of such outreach increasing the amount of noise, I wonder if that tradeoff might be shifting as timelines get shorter, given that we don’t seem to have better plans than “having a diverse set of smart people come up with novel ideas of their own in the hope that one of those works out”. So taking steps to entice a somewhat more diverse group of people into the conversation might be worth it?
Not saying that this should be MIRI’s job, rather stating that I’m confused because I feel like we as a community are not taking an action that would seem obvious to me.
I wrote about this a bit before, but in the current world my impression is that actually we’re pretty capacity-limited, and so the threshold is not “would be good to do” but “is better than my current top undone item”. If you see something that seems good to do that doesn’t have much in the way of unilateralist risk, you doing it is probably the right call. [How else is the field going to get more capacity?]
Not sure if I’m the right person, but it seems worth thinking about how one might approach this if one were to do it.
So the idea is to have an AI-alignment PR/social media org/group/NGO/think tank/company whose goal is to contribute to a world with a more diverse set of high-quality ideas about how to safely align powerful AI. The only other organization roughly in this space that I can think of would be 80,000 Hours, which is somewhat more general in its goals and more conservative in its strategies.
I’m not a sales/marketing person, but as I understand it, the usual metaphor to use here is a funnel? (There’s a toy back-of-the-envelope sketch of one after the list below.)
Starting with maybe ads / sponsorships trying to reach the right people[0] (e.g. I saw Jane Street sponsor Matt Parker)
then narrowing down more and more, first by introducing people to why this is an issue (orthogonality, instrumental convergence)
hopefully having them realize for themselves, guided by arguments, that this is an issue that genuinely needs solving and that maybe their skills would be useful
increasing the math as needed
finally, somehow selecting for self-reliance and providing a path for getting started with thinking about this problem on their own / model building / independent research
or otherwise improving the overall situation (convince your congress member of something? run for congress? …)
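To make the funnel framing concrete, here’s a minimal back-of-the-envelope sketch in Python. Every number in it (the reach and each conversion rate) is an invented placeholder rather than an estimate from any real campaign; the point is only that with plausible-looking drop-off at each stage, the top of the funnel has to be very wide for even a handful of people to come out the bottom as independent researchers.

```python
# Toy funnel model. All numbers are made-up placeholders for illustration,
# not estimates of real click-through or conversion rates.

reach = 1_000_000  # hypothetical number of people reached via ads / sponsored videos

stages = [
    ("clicked through to intro material",       0.05),
    ("engaged with the core arguments",         0.20),  # orthogonality, instrumental convergence, ...
    ("worked through the more mathy material",  0.10),
    ("started independent work on the problem", 0.02),
]

remaining = reach
print(f"{'reached via ads / sponsorships':45s} {remaining:>9,}")
for stage, rate in stages:
    remaining = int(remaining * rate)
    print(f"{stage:45s} {remaining:>9,}")
```

With these made-up numbers, a million people reached yields on the order of twenty people starting independent work, which is one way of seeing why both widening the top of the funnel and improving each conversion step (e.g. through better copywriting) could matter.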
Probably that would include copywriting (or hiring or contracting copywriters) to go over a number of our documents to make them more digestible and actionable.
So, I’m probably not the right person to get this off the ground, because I don’t have a clue about any of this (not even entrepreneurship in general), but it does seem like a thing worth doing and maybe like an initiative that would get funding from whoever funds such things these days?
[0] Though maybe we should also work towards a better understanding of who “the right people” are? Given that our current bunch of ML researchers/physicists/mathematicians hasn’t been able to solve it, maybe it would be time to consider broadening our net in some responsible way.
On second thought: Don’t we have orgs that work on AI governance/policy? I would expect them to be more likely to have the skills/expertise to pull this off, right?
So, here’s a thing that I don’t think exists yet (or, at least, it doesn’t exist enough that I know about it to link it to you): an overview of who’s out there, what ‘areas of responsibility’ they think they have, what ‘areas of responsibility’ they don’t want to have, and what the holes in the overall space are. It probably is the case that there are lots of orgs that work on AI governance/policy, and each of them is probably trying to cover a narrow corner of the space, instead of trying to hold ‘all of it’.
So if someone says “I have an idea how we should regulate medical AI stuff—oh, CSET already exists, I should leave it to them”, CSET’s response will probably be “what? We focus solely on the national security implications of AI stuff; medical regulation is not on our radar, let alone a place where we don’t want competition.”
I should maybe note here that there’s a common thing I see in EA spaces that only sometimes makes sense, and so I want to point at it so that people can deliberately decide whether or not to do it. In selfish, profit-driven worlds, competition is the obvious thing to do; when someone else has discovered that you can make profits by selling lemonade, you should maybe also try to sell lemonade to get some of those profits, instead of saying “ah, they have lemonade handled.” In altruistic, overall-success-driven worlds, competition is the obvious thing to avoid; there are so many undone tasks that you should try to find a task that no one is working on, and then work on that.
One downside is this means the eventual allocation of institutions / people to roles is hugely driven by inertia and ‘who showed up when that was the top item in the queue’ instead of ‘who is the best fit now’. [This can be sensible if everyone ‘came in as a generalist’ and had to skill up from scratch, but still seems sort of questionable; even if people are generalists when it comes to skills, they’re probably not generalists when it comes to personality.]
Another downside is that probably it makes more sense to have a second firm attempting to solve the biggest problem before you get a first firm attempting to solve the twelfth biggest problem. Having a sense of the various values of the different approaches—and how much they depend on each other, or on things that don’t exist yet—might be useful.
...yet!