I cannot fathom how this is any more than a distraction from the hard reality that when human beings gain the ability to manufacture “agent AI”, we WILL manufacture it.
Any number of companies and/or individuals can ethically choose to focus on “tool AI” rather than “agent AI”, but that will never erase the inevitable human drive to create whatever we believe or know we can create.
In simple terms, SI’s viewpoint (as I understand it) is that “agent AIs” are inevitable… some group or individual somewhere, at some point, WILL produce the phenomenon, if for no other reason than because it is human nature to look behind the curtain no matter what the inherent risks may be. History has no shortage of proof in support of this truth.
SI asserts that (again, as I understand it) it is imperative for someone to at least attempt to create a friendly “agent AI” FIRST, so there is at least a chance that human interests will be part of the evolving equation… an equation that could potentially change too quickly for humans to assume there will be time for testing or second chances.
I am not saying I agree with SI’s stance, but I don’t see how an argument that SI should spend time, money, and energy on a possible alternative to “agent AI” is even relevant, when the point is explicitly that it doesn’t matter how many alternatives there are nor how much safer they may be for humans; “agent AI” WILL happen at some point in the future, and its impact should be addressed, even if our attempts at addressing it are ultimately futile due to unforeseen developments.
Try applying Karnofsky’s style of argument above to the creation of the atomic bomb. Using the logic of this argument in a pre-atomic world, one would simply say, “It will be fine so long as we all agree NOT to go there. Let’s work on something similar, but with less destructive force,” and expect this to stop the scientists of the world from producing an atomic bomb. Once the human mind becomes aware of the possibility of something that was once considered beyond comprehension, it will never rest until it has been achieved.
Once the human mind becomes aware of the possibility of something that was once considered beyond comprehension, it will never rest until it has been achieved.
Is this true, though? Cobalt bombs and planet-cracking nukes, for example, have not been built as far as anyone can tell.
I agree that agent AI doesn’t look like those two, in that both of those naturally require massive infrastructure and political will, whereas an agent AI, once computers are sufficiently powerful, should only require knowledge of how to build one.
You caught me… I tend to make overly generalized statements. I am working on being more concise with my language, but my enthusiasm still gets the best of me too often.
You make a good point, but I don’t necessarily see the requirement of massive infrastructure and political will as the primary barrier to achieving such goals. As I see it, any idea, no matter how grand or costly, is achievable so long as a kernel exists at the core of that idea that promises something “priceless”, whether spiritually, intellectually, materially, etc. For example, a “planet cracking nuke” can only have one outcome: the absolute end of our world. There is no imaginable scenario in which cracking the planet apart would benefit any group or individual. (Potentially, in the future, there could be benefits to cracking apart a planet that we did not actually live on, but in the context of the here and now, a planet cracking nuke holds no kernel, no promise of something priceless.)
AI fascinates because, no matter how many horrific outcomes the human mind can conceive of, there is an unshakable sense that AI also holds the key to unlocking answers to questions humanity has sought since the beginning of thought itself. That is a rather large kernel, and it is never going to go dim, despite the very real OR the absurdly unlikely risks involved.
So it is this kernel of priceless return at the core of “agent AI” that, for me, makes its eventual creation a certainty given a long enough timeline, not a mere likelihood.