No I’m afraid you’re confusing me with someone else. I haven’t had the chance yet to see the fair city of Austin or attend AAAI, although I would like to. My current day job isn’t in the AI field so it would sadly be an unjustifiable expense.
To elaborate on the prior point, I have for some time engaged with not just yourself, but other MIRI-affiliated researchers as well as Nate and Luke before him. MIRI, FHI, and now FLI have been frustrating to me as their PR engagements have set the narrative and in some cases taken money that otherwise would have gone towards creating the technology that will finally allow us to end pain and suffering in the world. But instead funds and researcher attention are going into basic maths and philosophy that have questionable relevance to the technologies at hand.
However the precautionary vs proactionary description sheds a different light. If you think precautionary approaches are defensible, in spite of overwhelming evidence of their ineffectiveness, then I don’t think this is a debate worth having.
> in some cases taken money that otherwise would have gone towards creating the technology that will finally allow us to end pain and suffering in the world.
If one looks at AI systems as including machine learning development, I think the estimate is something like a thousand times as many resources are spent on development as on safety research. I don’t think taking all of the safety money and putting it into ‘full speed ahead!’ would make much difference in time to AGI creation, but I do think transferring funds in the reverse direction may make a big difference for what that pain and suffering is replaced with.
I’ll go back to proactively building AI.
So, in my day job I do build AI systems, but not the AGI variety. I don’t have the interest in mathematical logic necessary to do the sort of work MIRI does. I’m just glad that they are doing it, and hopeful that it turns out to make a difference.
> If one looks at AI systems as including machine learning development, I think the estimate is something like a thousand times as many resources are spent on development as on safety research.
Because everyone is working on machine learning, and machine learning is not AGI. AI is the set of engineering techniques for making programs that act intelligently; AGI is the process of taking those components and actually constructing something useful. It is the difference between computer science and a computer scientist. Machine learning is very useful for doing inference, but AGI is so much more than that, and there are very few resources being spent on AGI issues.
By the way, you should consider joining ##hplusroadmap on Freenode IRC. There’s a community of pragmatic engineers there working on a variety of transhumanist projects, and your AI experience would be valued. Say hi to maaku or kanzure when you join.