Are the AI scientists you know pursuing AGI or more powerful narrow AI systems?
As someone who is new to this space, I'm trying to wrap my head around the desire to create AGI, which could be intensely frightening and dangerous even to the developer of such a system.
I mean, not that many people are hell-bent on finding the next big virus or developing the next weapon, so I don't see why AGI is as inevitable as you say it is. Thus I suppose developers of these systems must firmly believe there is very little danger attached to developing a system with some 2-5x general human intelligence.
If you happen to be one of these developers, could you perhaps share with me the thesis behind why you feel this way, or at least the studies, papers, etc. that give you assurance that what you're doing is safe and largely beneficial to society as a whole?
There are a lot of groups pursuing AGI. Some claim they are doing so with the goal of benefiting humanity; others are simply in pursuit of profit and power. Indeed, the actors I personally am most concerned about are those who are relatively selfish and immoral, self-confident and incautious, and competent enough to at least use and modify code published by researchers. Those who think they can dodge, or externalize to society, the negative consequences and reap the benefits, and who don't take the existential risk stuff seriously. You know what I mean. The L33T |-|ACKZ0R demographic.
I don't personally work in AI. But OpenAI, for example, states clearly in its own goals that it aims to build AGI, and Sam Altman wrote a whole post called "Moore's Law for Everything" in which he outlines his vision for an AGI future. I consider it naïve nonsense, personally, but the drive seems to be simply the idea of a utopian world of abundance, with technological development going faster and faster as AGI makes itself smarter.
EDIT: sorry, didn't realise you weren't replying to me, so my answer doesn't make a lot of sense. Still, gonna leave it here.