Provided your work stays within the boundary of safe stuff, or stuff that is already very well known, asking around in public should be fine.
If you’re working with questionable stuff that isn’t well known, that does get trickier. One strategy is to just… not work on that kind of thing. I’ve dropped a few research avenues for exactly that reason.
Other than that, getting to know people in the field or otherwise establishing some kind of working relationship could be useful. More organized versions of this could look like Refine, AI Safety Camp, or SERI MATS, or, if you get a grant somewhere, you could try talking to someone at the organization about your research path.
And as long as you’re generally polite, not too pushy, and not asking too much, you’ll probably find a lot of people willing to respond to DMs or e-mails. Might as well let them make the decision that they don’t want to spend the time to respond rather than assuming it ahead of time. (I’d be willing to try answering questions now and again, but… I am by no means an authority in this field. I only very recently got a grant to start working on this for realsies.)
It would be really nice to figure out something to cover this use case in a more organized way that wouldn’t require the kinds of commitments that mentorships imply. I’m kind of wondering about just setting up a registry of ‘hey I know things and I’m willing to answer questions sometimes’ people. Might already exist somewhere.
[I also just got funded (FTX) to work on this for realsies 😸🙀 ]
I’m still in “learn the field” mode and haven’t picked a direction to dive into, but I am asking myself questions like “how would someone armed with a pretty strong AI take over the world?”.
Regarding commitment from the mentor: My current format is “live blogging” in a Slack channel. A mentor could look whenever they want, and comment only on whatever they want to. wdyt?
(But I don’t know who to add to such a channel, which would also contain the potentially harmful ideas.)
[I also just got funded (FTX) to work on this for realsies 😸🙀 ]
Congratulations and welcome :D
A mentor could look whenever they want, and comment only on whatever they want to. wdyt?
Sounds reasonable. I’m not actually all that familiar with Slack features, but if it’s a pure sequential chat log, there may be some value in using something with a more forum-y layout and threaded topics. I’ve considered using GitHub for this purpose, since it combines a bunch of collaboration features with free private repos and permissions management.
Still don’t know what to do on the potentially dangerous side of things, though. Getting advice about that sort of thing tends to require both knowledge and a particular type of trustworthiness, and there just aren’t a lot of humans in that subset available for frequent pokes. And for particularly spooky stuff, I would lean towards only trusting E2EE services, though that kind of thing should be rare.