You’re welcome. I’m happy to read that people find it interesting.
Regarding Ed, your explanation seems different from what I recall hearing him say. But I'll follow my rule: I won't repeat what a person said unless it was given in a public talk. I'll let him know we mentioned him, and if he wants, he can answer himself.
I'll just state that, honestly, I would have loved to collaborate with Ed. Our discussions were really interesting. However, he already has an assistant, and I have not heard anything making me believe he would need a second one.
Congratulations on ending my long-time LW lurker status and prompting me to comment for once. =)
I think Ben’s comment hits pretty close to the state of affairs. I have been internalizing MIRI’s goals and looking for obstacles in the surrounding research space that I can knock down to make their (our?) work go more smoothly, either in the form of subgoals or by backwards-chaining from required capabilities to get a sense of how to proceed.
Why do I work around the edges? Mostly because if I take the vector along which I’m trying to push the world and the direction MIRI is trying to push the world, and project one onto the other, this currently seems to be the approach that maximizes the dot product. Some of what I want to do doesn’t seem to work behind closed doors, because I need a lot more outside feedback, or involves projects that just need to bake more before being pushed in a more product-like direction.
Sometimes I lend a hand behind the curtain where one of their projects comes close to something I have experience to offer; other times, it’s offering a reframing of the problem, or offering libraries or connections to folks that they can build atop.
I can say that I’ve very much enjoyed working with MIRI over the last 16 months or so, and I think the relationship has been mutually beneficial. Nate has given me a lot of latitude in choosing what to work on and how I can contribute, and I have to admit he couldn’t have come up with a more effective way to leash me to his cause had he tried.
I’ve attended so many of these AIRCS workshops in part to better understand how to talk about AI safety. (I think I’ve been to something like 8–10 of them so far?) For better or worse, I have a reputation in the outside functional programming community, and I’d hate to give folks the wrong impression of MIRI simply by dint of my having done insufficient homework. So I’ve been using this as a way to sharpen my arguments and gather a more nuanced understanding of which ways of talking about AI safety, rationality, 80,000 Hours, EA, x-risks, etc. work for my kind of audience, and which seem to fall short, especially when talking to the kind of hardcore computer scientists and mathematicians I tend to talk to.
Arthur, I greatly enjoyed interacting with you at the workshop (who knew that an expert on logic and automata theory was exactly what I needed at that moment!?), and I’m sorry that the MIRI recruitment attempt didn’t move forward.
Cool, all seems good (and happy to be corrected) :-)