The first and selfish answer (probably shared by countless others) would be “I’m interested in working on that.”
Am I qualified? Maybe; maybe not. I suspect I won’t know what makes an effective AI safety planner until somebody actually starts to do it.
I make this observation: it looks to me like the potential emergence of AGI has two fronts. The first is raw scientific development, programmers, engineers, and cognitive scientists just “doing their thing,” understanding our world by replicating and modifying parts of it. The second is the one that the vast majority of people can already see: specific-task AI devices getting stronger, faster, and better connected. If it cannot be done today, then within months a person will be able to talk to the air around them and order a cheeseburger that is cooked, assembled, delivered, and paid for entirely by automated, unconscious agents. Who am I to say that, with enough further development and integration of such automated systems, we would not see emergent automated behavior just as fantastic or dangerous as anything a “thinking” machine might display?
Such a watchdog group could already be useful today if it applied some economic expertise to current technology issues (e.g. workplace automation and the unavoidable employment changes it causes).
This is a long-winded “I agree.” We should not wait for someone else to organize our protective stance against the agents we build specifically to be better at tasks than ourselves, be they specific or general. Multiple experienced people should always be asking: “What is the driving goal of this AGI? What are its success/failure conditions? What information does it have access to? Where are the means to interrupt it if it finds an unfriendly solution to its hurdles?”
Hi Brian, thanks for your reply! I don’t think we would need very special qualifications for this; it’s more a matter of reading up on the current state of AI and AI safety, citing the main conclusions from academia, and making sure they get presented well to both policy makers and ordinary people. You say you’d expect countless others to want to work on this too, but I haven’t found them yet. I’m still hopeful they exist somewhere, and if you find people already doing this, I’d love to get in contact with them. Otherwise, we should start ourselves.
Interesting observation! I think your second front is especially interesting/worrying where the AI improvement tasks themselves are automated. For a positive feedback loop to occur, making AI get smarter very fast, many imagine an AGI is necessary. However, I’m asking: what is improving AI now, and which skills does that require? Part of it is hardware improvement: academia and industry working together to keep Moore’s law going. The other part is software/algorithm improvement, also done by academics and by companies such as DeepMind. So if the tasks of those researchers were automated, that would be the point at which the singularity could take off. Their jobs tend to be analytical and focused on single tasks, rather than generically human and social, which I suspect makes them easier for AI to take over. That in turn means the singularity (there should be a less sci-fi name for this) could happen sooner than AGI, if policy doesn’t intervene. So this is also a long-winded “I agree.”
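To make that feedback-loop intuition concrete, here is a minimal toy sketch (my own illustration, not anything from the thread; the automated_fraction curve and every parameter value are invented assumptions). It models research effort as human effort plus a growing automated share, and shows growth accelerating sharply once narrow systems automate enough of the research loop, without anything resembling AGI.

```python
# Toy model (assumed for illustration only): AI capability grows at a rate set by
# human research effort plus the share of research tasks current AI can automate.
# As that share rises with capability, growth compounds on itself.

def automated_fraction(capability, threshold=10.0):
    """Hypothetical share of AI-research tasks automated at a given capability level."""
    return capability / (capability + threshold)  # rises smoothly from ~0 toward 1

def simulate(years=30, human_effort=1.0, leverage=5.0):
    capability = 1.0
    for year in range(years):
        # Effective effort = humans + automated research scaled by some leverage factor.
        effective_effort = human_effort + leverage * automated_fraction(capability)
        capability *= 1.0 + 0.1 * effective_effort  # assume 10% growth per unit of effort
        print(f"year {year:2d}: capability {capability:8.1f}, "
              f"automated share {automated_fraction(capability):.0%}")

if __name__ == "__main__":
    simulate()
```

Running it, early years look like slow, roughly steady progress; once the automated share climbs, the growth rate itself keeps rising. The numbers mean nothing in themselves; the point is only that automating the researchers’ tasks is the step that closes the loop.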
So how should we go about organizing this, if no one is doing it yet? Any thoughts?
Thanks again for your reply; as I said above, it’s heartening that there are people out there who are more or less on the same page!