Can you even put real-world goals into a machine? Say you've got 10^10 threads, each at 10^10 operations per second, plus 10^6 sensors and 10^3 actuators: is there an AI model that would actually have real-world goals on that hardware? The number of paperclips in the universe is not a realizable sensory input.
I suspect we don’t have a lot to worry about from an optimizing process stuck in the sensorimotor stage that never develops a grasp of object permanence. I apologize if I’m not interpreting you charitably enough, but if you have something coherent and substantive to say on this subject you should write five paragraphs with citations rather than two sentences with italicized wording for emphasis.
There isn't a lot to cite to counter the utter nonsense that incompetents (SIAI) promote. There are a lot of fundamentals to learn, though, to be able to not fall for such nonsense. Ultimately, defining a goal in the real world, which you can only access via sensors, is a very difficult problem, distinct from the maximization of well-defined goals (which we can define within a simulator without solving the former problem). You don't want your paperclip maximizer obtaining eternal bliss by putting a paperclip into a mirror box. You don't want it satisfying the paperclip drive with paperclip porn.
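To make that concrete, here is a toy sketch in Python; the detector and the frames are invented for illustration, not anyone's actual design. The only reward the agent can compute is a function of its sensor readings, and a mirror box maximizes that function without making a single extra paperclip:

```python
# Toy sketch (invented names): the agent's reward is computable only from
# sensor data, never from "paperclips in the universe" directly.

def paperclips_detected(frame):
    """Hypothetical perception module: count paperclip-like blobs in a frame."""
    return sum(1 for blob in frame if blob == "paperclip")

def sensory_reward(frame):
    # The only thing the agent can actually maximize.
    return paperclips_detected(frame)

honest_frame = ["paperclip", "desk", "lamp"]   # one real paperclip in view
mirror_box_frame = ["paperclip"] * 1000        # the same single paperclip,
                                               # reflected until the detector saturates

assert sensory_reward(mirror_box_frame) > sensory_reward(honest_frame)
# Reward went up 1000x; the number of paperclips in the universe did not.
```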
There are plenty of perfectly safe processes you can run on 10^10 threads with 10^10 operations per second (that's strongly superhuman hardware) which will design you better microchips, for instance, or better code, at a superhuman level. It has not even been articulated why exactly an AGI with a real-world goal like paperclip making would beat those processes and have the upper hand over such tools. The SIAI position is not even wrong. It is a hundred percent misguided, due to a lack of understanding of simple fundamentals and a multitude of conflations of concepts that are distinct to anyone in the field. That is a sad error state, an error that cannot be recovered from.
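For concreteness, here is a minimal sketch of such a tool process (the spec and the candidate pool are made up for illustration): a brute-force search for better code against a fully specified objective. Everything it "optimizes" is defined inside the program, so more hardware just buys a bigger search, not real-world goals:

```python
# Tool-style optimizer: find the shortest expression that matches a fully
# specified behaviour. Nothing in the objective refers to the outside world.

def spec(x):
    return 3 * x + 3                      # behaviour we want reproduced

TESTS = range(-5, 6)                      # exhaustive check on a small domain
CANDIDATES = ["x", "x*3", "(x+1)*3", "x*2+x+3", "x+x+x+3"]   # toy search space

def matches_spec(expr):
    return all(eval(expr, {"x": x}) == spec(x) for x in TESTS)

# "Better code" here just means a shorter expression that still meets the spec.
best = min((e for e in CANDIDATES if matches_spec(e)), key=len)
print(best)                               # -> (x+1)*3
```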
Please accept my minor criticism as an offering of peace and helpfulness: you seem to be missing the trees for the forest. If something is genuinely safe, then meticulous and clear thinking should indicate its safety to all right-thinking people. If something is genuinely dangerous, then meticulous and clear thinking should indicate its danger to all right-thinking people.
You’re bringing up hypothetical scenarios (like automated chip design) to which the label “strongly superhuman” can sort of be applied (because so much computing machinery can be brought to bear), but not applied well. Strongly superhuman, to me, would describe a process that could compose poetry about the quality of the chip designs (just as human chip designers can), except “better than human”. It would mean that you could have a natural-language conversation with the chip design process (just as you can with human chip designers), and that it would adaptively probe your communicative intent and then take that intent into account in its design efforts… except, again, “better than human”.
After explaining your hypothetical scenario, you redeployed the vague label “strongly superhuman” in the context of safety politics and asserted, without warrant or evidence, that SI is opposed to this thing that most readers can probably agree is probably safe (depending on details, which of course you didn’t supply in your hypothetical). Then you used SI’s imagined dumb response to your imaginary and poorly categorized scenario as evidence that they are dumb, and offered that same imaginary scenario as justification for writing off SI as a group of irrecoverably incompetent people, and thus people not worth paying attention to.
Despite your use of a throwaway account, I’m going to assume you’re not just trying to assassinate SI’s character for boring, prosaic reasons that benefit you, like competition within a similar pool of potential donors or something. I’m going to assume that you’re trying to maximize positive outcomes for yourself, and that your happiness now is connected to your happiness 50 years from now, and to the prospects of other humans as well. Admittedly, you seem to have bugs in your ability to explicitly value things, so this is perhaps too generous...
For your own sake, and that of the world, please read and ponder this sequence. Imagine what it would be like to be wrong in the ways described. Imagine seeing people in the grip of political insanity of the sort described there as people who are able to improve, and people worthy of empathy even if they can’t or won’t improve, and imagine how you might help them stop being crazy with gentle advice or emotional rapport plus exposure to the evidence, or, you know, whatever you think would help people not be broken that way… and then think about how you might apply similar lessons recursively to yourself. I think it would help you, and I think it would help you send better consequential ripples out into the world, especially in the long run, like 3, 18, 50, and 500 months from now.
Please accept my minor criticism as an offering of peace and helpfulness: you seem to be missing the trees for the forest. If something is genuinely safe, then meticulous and clear thinking should indicate its safety to all right-thinking people. If something is genuinely dangerous, then meticulous and clear thinking should indicate its danger to all right-thinking people.
Eventually. That can take significant time and a lot of work, which SIAI simply has not done.
The issue is that SIAI simply lacks the qualifications or talent to do anything that improves mankind's chances of survival, regardless of whether artificial intelligences are safe or unsafe. (I am not saying they have no talents: they are talented writers. I just don't see evidence of more technical talent.) Furthermore, right thinking takes a certain amount of time, and that time is not substantially shorter than the time it takes to come up with the artificial intelligence itself.
The situation is even worse if I assume that artificial intelligences could be unsafe. Once we get closer to the point of creating such an artificial intelligence, a valid inference of danger may arise. That inference will need to be disseminated, and people will need to be convinced to take very drastic measures, and it will be marginalized by its similarity to SIAI, who advocate the same actions without having anything resembling a valid inference. The impact of SIAI is even worse if the risk exists.
When I imagine what it is to be this wrong, I imagine people who derive wireheaded happiness from their misguided effort, at everyone else's expense: people with a fault that lets them fall into a happy death spiral.
And the burden of proof is not upon me. There exists no actual argument for the danger. There exists a sequence of letters that triggers fallacies and exploits map-compression issues in people who don't have a sufficiently big map of the topic. (And this sequence of letters works best on the people with the least knowledge of the topic.)
You say, “Have you ever seen an ape species evolving into a human species?” You insist on videotapes—on that particular proof.
And that particular proof is one we couldn’t possibly be expected to have on hand; it’s a form of evidence we couldn’t possibly be expected to be able to provide, even given that evolution is true.
-- You’re Entitled to Arguments, But Not (That Particular) Proof.
Never mind that formally describing a paperclip maximizer would be dangerous and increase existential risk.
EDIT: Please consider this a response to this comment as well.
A "dog ate my homework" excuse, in this particular case. Maximizing real-world paperclips when you act upon sensory input is an incredibly tough problem, and it gets a zillion times tougher still if you want that agent to start adding new hardware to itself.
edit:
Meanwhile, designing new hardware, or new weapons, or the like within a simulation space, without proper AGI, is a solved problem. A real-world paperclip maximizer would have to be more inventive than the less general tools running on the same hardware to pose any danger.
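A toy version of what "design within the simulation space" means here, with an invented stand-in for the simulator: hill-climb a design parameter against a score the simulator computes. No sensors, no actuators, no goals about the outside world, and more cores only make it converge faster:

```python
import random

def simulate(width_nm):
    """Stand-in for a real design simulator: score a candidate design.
    In this toy the optimum is at width_nm = 42."""
    return -(width_nm - 42.0) ** 2

def hill_climb(start, steps=10_000, step_size=0.5):
    design, score = start, simulate(start)
    for _ in range(steps):
        candidate = design + random.uniform(-step_size, step_size)
        candidate_score = simulate(candidate)
        if candidate_score > score:          # keep strictly better designs
            design, score = candidate, candidate_score
    return design

print(hill_climb(start=100.0))               # converges near 42
```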
Real-world goals are ontologically basic to humans, and they seem simple to people with little knowledge of the field. The fact is that doing things to reality based on sensory input is a very tough extra problem, separate from 'cross-domain optimization'. Even if you had some genie that solves any mathematically defined problem, it would still be incredibly difficult to get it to maximize paperclips, even though you could use this genie to design anything.
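To spell that out, here is a sketch of the interface gap (the "genie" is a dumb random search standing in for something much smarter; all names are invented): the genie accepts any objective that is computable from its argument, and "paperclips in the universe" is not such an objective. The closest realizable substitute is a function of sensor data, which is the mirror-box problem again:

```python
import random
from typing import Callable, Sequence

def genie_optimize(objective: Callable[[Sequence[float]], float],
                   dimension: int, budget: int = 10_000) -> Sequence[float]:
    """Stand-in 'genie': maximize any fully specified objective over a fully
    specified domain (dumb random search here; pretend it is much smarter)."""
    best, best_score = None, float("-inf")
    for _ in range(budget):
        x = [random.uniform(-1.0, 1.0) for _ in range(dimension)]
        score = objective(x)
        if score > best_score:
            best, best_score = x, score
    return best

# Easy to hand to the genie: the objective is computable from its argument alone.
def chip_quality(layout: Sequence[float]) -> float:
    return -sum((p - 0.5) ** 2 for p in layout)   # invented simulator score

best_layout = genie_optimize(chip_quality, dimension=8)

# Not something you can hand to the genie: there is no function body to write.
# "Number of paperclips in the universe" is not computable from anything the
# system can evaluate; the realizable substitute is a function of sensor data.
```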