Having only just caught up with the Paris Hilton thread, I've realised what Eliezer is trying to do and am suitably humbled. However, I choose the lottery thread to point up the unimaginable orders of magnitude of difference between the significance of trying to devise an optimal morality for the engineered intelligence that will supersede us (and yes, I do know that the etymology of supersede includes our death), and the significance of me and my better half buying a lotto ticket.
'Wasted hope' implies that we are to some extent free agents. Before even going there, Eliezer, you need to define your position on free will vs. determinism and Chalmers vs. Dennett. No doubt you have, in which case please excuse me and point me there.
To answer the lotto question, just look to how your post-singularity AI will handle frustration, disappointment, and low self-esteem. I don't have the math, but I do have the questions.
Our ability to handle our own dysfunctions is not even in its infancy. Our psychological models are a shambles (just look at the Tree of Knowledge as a smile- or tear-inducing example of how not to get there). Our therapeutic methodologies are at the shamanism 1.0.1 stage. And yet we hope to legislate for the intelligence that will replace us? Call that a bias: a triumph of hope over experience!
Next step: the Paris Hilton discussion on values was suitably learned, but however high you get in meta-meta-meta-values theory, there is an irreducible 'my values are what seems right to me'. Your post-singularity AI will have its own, unless it is very severely constrained (but then I guess it wouldn't be post-singularity, in which case we should all go to the beach and shut up, because nothing we could do or say will make any difference). That's why I like Ian McDonald's book; it focusses on that polarity.
BTW, I agree with the poster who postulated creepiness as a value. Cryonics is definitely creepy. Also, please get in touch when you've produced an AI program to match the smile on my wife's face when she comes in with a Lotto ticket and says 'this is for you', and the effect it has on me even though I know all the probability statistics. Tara.