Risky Machines: Artificial Intelligence as a Danger to Mankind
spuckblase
I like your non-fiction style a lot (I don’t know your fictional stuff). I often get the impression you’re in total control of the material: very thorough yet original, witty and humble. The exemplary research paper. Definitely more Luke than Yvain/Eliezer.
Navigating the LW rules is not intended to require precognition.
Well, it was required when (negative) karma for Main articles increased tenfold.
I’ll be there!
Do you still want to do this?
To be more specific:
I live in Germany, so my timezone is GMT+1. My preferred time would be on a workday sometime after 8 pm (my time). Since I’m a German native speaker, and the AI has the harder job anyway, I offer: 50 dollars for you if you win, 10 dollars for me if I do.
I agree in large parts, but it seems likely that value drift plays a role, too.
Well, I’m somewhat sure (80%?) that no human could do it, but...let’s find out! Original terms are fine.
I’d bet up to fifty dollars!?
Ok, so who’s the other one living in Berlin?
If there are others who feel the same way, maybe we could set up some experiments where AI players are anonymous.
In that case, I’d like to participate as gatekeeper. I’m ready to put some money on the line.
BTW, I wonder if Clippy would want to play a human, too.
Some have argued that a machine cannot reach human-level general intelligence; see, for example, Lucas (1961), Dreyfus (1972), Searle (1980), Block (1981), and Penrose (1994). But Chalmers (2010) points out that their arguments are irrelevant:

“To reply to the Lucas, Penrose, and Dreyfus objections, we can note that nothing in the singularity idea requires that an AI be a classical computational system or even that it be a computational system at all. For example, Penrose (like Lucas) holds that the brain is not an algorithmic system in the ordinary sense, but he allows that it is a mechanical system that relies on certain nonalgorithmic quantum processes. Dreyfus holds that the brain is not a rule-following symbolic system, but he allows that it may nevertheless be a mechanical system that relies on subsymbolic processes (for example, connectionist processes). If so, then these arguments give us no reason to deny that we can build artificial systems that exploit the relevant nonalgorithmic quantum processes, or the relevant subsymbolic processes, and that thereby allow us to simulate the human brain. As for the Searle and Block objections, these rely on the thesis that even if a system duplicates our behaviour, it might be missing important ‘internal’ aspects of mentality: consciousness, understanding, intentionality, and so on… [But if] there are systems that produce apparently superintelligent outputs, then whether or not these systems are truly conscious or intelligent, they will have a transformative impact on the rest of the world.”

Chalmers (2010) summarizes two arguments suggesting that machines can reach human-level general intelligence:
The emulation argument (see section 7.3)
The evolutionary argument (see section 7.4)
This whole paragraph doesn’t seem to belong to section 1.11.
It is standard in rational discourse to include and address opposing arguments, provided your audience includes anyone other than existing supporters. At a minimum, one should state an objection and cite a discussion of it.
This is not a rational discourse but part of an FAQ, providing explanations/definitions. Counterarguments would be misplaced.
For those who read German or can infer the meaning: philosopher Christoph Fehige shows a way to embrace utilitarianism and dust specks.
“Literalness” is explained in sufficient detail to get a first idea of the connection to FAI, but “Superpower” is not.
going back to the 1956 Dartmouth conference on AI
Maybe better (if this is good English): going back to the seminal 1956 Dartmouth conference on AI.
There are many types of digital intelligence. To name just four:
Readers might like to know what the others are and why you chose those four.
Relevant? (A fake ad by renowned artist Katerina Jebb)
Die Forscher kombinieren Daten aus Informatik und psychologischen Studien. Ihr Ziel: Eine Not-to-do-Liste, die jedes Unternehmen bekommt, das an künstlicher Intelligenz arbeitet.
Rough translation:
The researchers combine data from computer science and psychological studies. Their goal: a not-to-do list given to every company working on artificial intelligence.
Using early IA techniques is probably risky in most cases. Committed altruists might have a general advantage here.