That last one got through, so let’s try : Random malfunction ?
Chris
Trying to work out the biases of the new antispam filter. Frequency of comments from same individual in same thread ?
Just had a response to Goplat rejected as spam. Wonder what the biases built into the new antispam filter are ?
Goplat, can’t answer for Caledonian, but as I’m pretty sad & pathetic myself, I’ll take a stab. The unborn represent variety and potentiality. More of the same represents sterility. Sure I’d like to live 500 productive & happy years, but am in my better moments conscious that with present biotechnology this is unlikely. With SIAI-improved biotechnology, who knows ? However, my totally uninformed intuition is that however super-productive & long-lived the ultra-new curly-wurly chromosomes that my friendly neighbourhood SIAI will give me are, they would do better (in accordance with their interest) endowing them on the young of the species. Your argument that we now are happy living 80 years where our ancestors were lucky to make 40 is pertinent, but adding years after 40 still doesn’t increase the productive lifespan of a mathematician. Jesus died at 30 (or was it 33 ?). Mother Teresa was doing productive caring work into advanced old age. So perhaps youth = creativity, age = caring. A ‘Self-Improving’ AI would surely privilege the 1st option. For better or for worse. Personally I’m for balance, and am all for the increase of life expectancy at a rate which is compatible with human capacity to adapt. I wrote a piece on the Impossibility of a ‘Friendly’ SIAI which I may inflict on the world someday.
Couldn’t resist adding a complaint about the abuse of the term ‘guru’ as a term of …abuse. It represents in fact an exponent of a perfectly respectable form of expertise transmission in non-rational domains. Drift into abuse of authority by such an exponent is perhaps more likely because the method relies on authority rather than argument, but that doesn’t mean that the concept is invalid, or indeed that there is any other method possible in those domains.
What about the Guru who wrote ‘Why work towards the Singularity’ ? It is a text with a distinctly Messianic feel. Or, to be more generous, a Promethean feel. While it is true that Hom Sap has a nasty itch to create anything that can be created, regardless, there’s no need for such pseudo-valuations as the following : “If there’s a Singularity effort that has a strong vision of this future and supports projects that explicitly focus on transhuman technologies such as brain-computer interfaces and self-improving Artificial Intelligence, then humanity may succeed in making the transition to this future a few years earlier, saving millions of people who would have otherwise died. Around the world, the planetary death rate is around fifty-five million people per year (UN statistics) − 150,000 lives per day, 6,000 lives per hour. These deaths are not just premature but perhaps actually unnecessary. At the very least, the amount of lost lifespan is far more than modern statistics would suggest.” Who says that continuing the lives of us dull old farts, to the inevitable detriment of the unborn, has any positive value ? I’d say that’s monstrous. The transhuman AI may be an unavoidable consequence of our Luciferian inclination to meddle. That doesn’t mean it’s a cause. Any chance of it becoming a cult ?
Robin : “For example, “ostracizing anyone who dared contradict her” would seemingly apply to a great many, perhaps the majority, of ordinary human organizations.” : Yes, but there is a difference between ostracizing = damning to the nethermost pits of hell with no hope of salvation and ostracizing = delaying your next pay increase by a couple of months. i.e., the cult-dom-ness is contingent on the existential nature of the ostracization.
EY, thanks for the link in this post to your Global Risks paper. It addresses in passing something that had puzzled me : how would an AI acquire hands ? On Ayn Rand : the concept of purity seems to me central in cult formation, more so than that of absoluteness. See, for instance, the deviations of the Self-Realization Fellowship in handling Yogananda’s legacy. Or, for that matter, General Jack D Ripper in Dr Strangelove. So, let your knowledge and wisdom increase, but let them not be pure. Amen.
“an “environmentalist” is not someone who believes in the existence of the environment.” Non sequitur. An environmentalist is someone who believes in the value of the environment. Sloppy, sloppy, sloppy…
It’s amusing to see ‘criterion of goodness’ as a simile for ‘criterion of correctness’. The Inquisition believed they were both ‘correct’ and ‘good’. In torturing you, they were saving your soul, which was, for them, the ultimate in Utility. So, in calculating utility, beware of your assumptions.
Having only just caught up with the Paris Hilton thread I’ve only just realised what Eliezer is trying to do and am suitably humbled. However, I choose the lottery thread to point up the unimaginable orders of magnitude of difference between the significance of trying to devise an optimal morality for the engineered intelligence which will supersede us (and yes, I do know that the etymology of supersede does include our death), and the significance of me and my better half buying a lotto ticket. ‘Wasted hope’ implies that we are to some extent free agents. Before even going there, Eliezer, you need to define your position on free will vs determinism & Chalmers vs Dennett. No doubt you have, in which case please excuse me and point me there. To answer the lotto question, just look to how your post-singularity AI will handle frustration, disappointment, and low self-esteem. I don’t have the math but I do have the questions. Our ability to handle our own dysfunctions is not even in its infancy. Our psychological models are a shambles (just look at the Tree of Knowledge as a smile- or tear-inducing example of how not to get there). Our therapeutic methodologies are at the shamanism 1.0.1 stage. And yet we hope to legislate for the intelligence that will replace us ? Call that a bias, a triumph of hope over experience ! Next step, the Paris Hilton discussion on values was suitably learned, but however high you get in meta- meta- meta-values theory, there is an irreducible ‘my values are what seems right to me’. Your post-singularity AI will have its own, unless it is very severely constrained (but then I guess it wouldn’t be post-singularity, in which case we should all go to the beach and shut up, because nothing we could do or say will make any difference). That’s why I like Ian McDonald’s book, it focusses on that polarity. BTW, I agree with the poster who postulated creepiness as a value. Cryogenics is definitely creepy.
Also, please get in touch when you’ve produced an AI program to match the smile on my wife’s face when she comes in with a Lotto ticket and says ‘this is for you’, and the effect it has on me even though I know all the probability statistics. Tara.
I strongly encourage any AI worker who hasn’t already done so to read Ian McDonald’s ‘River of Gods’. He’s pretty positive (in timescale terms...) on AI, his answer to the question “How long will it be until we have human-level AI?” is 2047 AD, and it’s a totally gob-smacking, brilliant, read.
Nick, sure, heroically not doing something will never grab the attention in the way that doing something does. Today, approximately 1,000,000 cars in Paris were not burned. So what makes the headlines ?
Statistics is actually fun, as the notion of probability is so non-intuitive. There’s a 1 in 6 chance of throwing a deuce. What does that mean in the real world ? Well, if I throw the die 6 times, it should come up once ? Euh, no… Well, if I throw 100 sequences of 6 throws I can predict the number of times the deuce will show up ? Euh, no… Well, if I throw 1000 runs of 100 sequences of 6 throws… sorry, you still don’t know one damn thing about what the result will be. So what does probability mean ? It’s great ! One of life’s rich prizes is to watch someone making a prediction on a particular instance based on statistical reasoning.
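The point above can be made concrete with a quick simulation — a minimal sketch (function name and trial counts are my own invention): no single run of 6, or 600, throws is obliged to contain its “expected” share of deuces, yet the long-run frequency still drifts toward 1/6.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def deuce_frequency(n_throws: int) -> float:
    """Fraction of n_throws of a fair six-sided die that come up 2."""
    return sum(1 for _ in range(n_throws) if random.randint(1, 6) == 2) / n_throws

# Probability says nothing certain about any particular run:
# a run of 6 throws may contain zero deuces, or three.
# Only the long-run frequency settles down near 1/6 ≈ 0.167.
for n in (6, 600, 60000):
    print(n, "throws → observed frequency", round(deuce_frequency(n), 3))
```

The 1/6 figure constrains the ensemble of runs, never the next throw — which is exactly why predictions about a particular instance from statistical reasoning are such fun to watch.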
BTW, significant data was withheld in the examples given : a) how many dips do you get at the jellybeans ? Do the red ones taste better ? What is their market value with the current weak dollar ? b) 10,000 people overall or 10,000 infected people ? Degree of infectiousness of the disease ? But that’s what the affect heuristic is for : taking decisions in situations of incomplete data. 150 people is a single bounded set, 98% of x people sounds as though it just might be a replicable set. Go for it.
Ha, Spock vs McCoy. I think Kirk’s position was that it’s the affect heuristic that makes us warm, cuddly, and human data processors, even if it can be faulted in some artificial situations. This ties in with the other thread about how far we look down possible chains of results in deciding on an action. We’re wired to look to proximal results with high affect, and I’m all for it.
Hi Richard, any relation to the punch card guy ? IBM paid my salary for 35 years. Someone in one of these threads got squashed flatter than a pancake for supposedly confusing maps and territories, so let’s be careful with models of reality. When I say ‘dependent on usefulness’, I just meant that the selectivity and level of detail of the map would depend on what you want to use it for. Not much point in going to the doctor and telling him the ‘truth’ about my finger, which would involve energy fields and dark matter, if what I want from him is a sticking plaster. Lovely article here on what the Romans thought was important in a map, and why it doesn’t look like one that we’d find useful, or ‘truthful’, today. http://news.bbc.co.uk/2/hi/europe/7113810.stm
I’m left in ‘awe and wonder’ at the literalism of the debates going on here. The OT is a bunch of mythology and folklore, so, what else is new ? The NT is a heterogeneous collection of Roman imperial propaganda, Jewish apocalyptic propaganda, and perhaps, some vague recollections of what a good man once said. So ? What does any of that have to do with logical categories ? Eliezer is guilty, as Anna pointed out, of mixing up the crudest OT literalism with any and every other level of religious experience and expression. I understand that, he was traumatised at age 5. Perhaps that also explains the violence of his reaction to Anna. The only interesting debate on the ‘singularity’ of religion is exactly the same debate as that on the ‘singularity’ of consciousness. Either there is a ‘watcher’, in the void, behind all thought and image, which constitutes the irreducible core of my consciousness, as for instance Daniel Dennett would not agree, or there is not. If there is, then there is a basis for religion. If there is not, then there is a basis for saying that we will never know final causes nor final intents, and what the hell.
‘Scuse me, but isn’t this trivial ? Both pragmatic and epistemic instances depend on available information. If you drive to Carrefour, you need some information to tell you they’re out of chocolate. And to see the ‘Out of chocolate’ sign, you need to have driven to Carrefour. So, dear friends, both instances depend on (a) purpose (b) information relative to the achievability of the purpose. Unless of course your purpose is ‘enculage des mouches’ [hair-splitting, literally ‘fly-buggering’], in which case, don’t go to Carrefour. Go to Tesco. PS Truth does not reduce to usefulness. Truth is a relative concept dependent on usefulness. I asked Schrödinger’s cat to contribute but she was busy with her Whiskas.
Robin, transmission of expertise in non-rational domains has to rely on authority rather than argument, so is more susceptible to slide into abuse of authority than transmission in rational domains. The original post here is strange in that it supposes such a type of transmission in the field of rational teaching. The definition of cult in the field of master / disciple relationships has to start with an examination of whether authority is being abused by, for example, being exercised in areas unrelated to the teaching. Don’t take sweets from philosophers.