You say, “Have you ever seen an ape species evolving into a human species?” You insist on videotapes—on that particular proof.
And that particular proof is one we couldn’t possibly be expected to have on hand; it’s a form of evidence we couldn’t possibly be expected to be able to provide, even given that evolution is true.
-- You’re Entitled to Arguments, But Not (That Particular) Proof.

A “dog ate my homework” excuse, in this particular case. Maximizing real-world paperclips when you act upon sensory input is an incredibly tough problem, and it gets a zillion times tougher still if you want that agent to start adding new hardware to itself.

edit: Simultaneously, designing new hardware, or new weapons, or the like, within a simulation space, without proper AGI, is a solved problem. A real-world paperclip maximizer would have to be more inventive than the less general tools running on the same hardware to pose any danger.

Real-world goals are ontologically basic to humans, and they seem simple to people with little knowledge of the field. In fact, acting on reality based on sensory input is a very tough extra problem, separate from ‘cross-domain optimization’. Even if you had some genie that could solve any mathematically defined problem, it would still be incredibly difficult to get that genie to maximize paperclips, even though you could use it to design anything.
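A toy sketch of the asymmetry being claimed (everything below, the grid-search “genie”, the stub names, all of it, is a hypothetical illustration, not anything from the quoted post): an optimizer for formally specified objectives is trivial to call, while the real-world objective can only reach it through a sensory proxy that nobody knows how to write.

```python
# Toy "genie": optimizes any mathematically defined objective over a
# bounded interval by brute-force grid search. Deliberately trivial;
# the point is that it only accepts objectives that are already formal.
def genie(objective, lo=0.0, hi=10.0, steps=10_000):
    xs = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return min(xs, key=objective)

# Easy case: the objective lives entirely inside mathematics.
print(genie(lambda x: (x - 3.0) ** 2))  # ~3.0

# Hard case: "paperclips in the real world" never enters the program
# directly; only a proxy computed from sensory input can. These stubs
# mark exactly the pieces that are missing.
def observe_world_after(action):
    raise NotImplementedError("no formal model of reality to query")

def count_paperclips(sensor_data):
    raise NotImplementedError("'paperclip' has no mathematical definition")

def negative_paperclip_count(action):
    return -count_paperclips(observe_world_after(action))

# genie(negative_paperclip_count)  # would raise: the objective was never defined
```

Swapping the grid search for a serious solver changes nothing here: the hard step is producing a formal negative_paperclip_count at all, which is the extra problem the paragraph above is pointing at.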
Never mind that formally describing a paperclip maximizer would itself be dangerous and increase existential risk.
EDIT: Please consider this a response to this comment as well.