Another question related to the SIAI, regarding advanced nanotechnology, is whether superhuman AI is at all possible without advanced nanotechnology.
Um… yes? Superhuman is a low bar and, more importantly, a completely arbitrary bar.
I’m not sure what you are trying to say here. What I said was simply that if you claim some sort of particle collider is going to destroy the world with a probability of 75% if run, I’ll ask how you came up with that estimate. I’ll ask you to provide more than a consistent internal logic: some evidence-based prior.
Evidence-based? By which you seem to mean ‘some sort of experiment’? Who would be insane enough to experiment with destroying the world? This situation is exactly where you must understand that evidence is not limited to ‘reference to historical experimental outcomes’. You actually will need to look at ‘consistent internal logic’… just make sure the consistent internal logic is well grounded on known physics.
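To make the disagreement concrete: in Bayesian terms, an “evidence-based prior” and “consistent internal logic” are just the two factors of a single update. Below is a minimal sketch of that arithmetic; every number in it is invented for illustration, and it is not the SIAI’s (or anyone’s) actual calculation.

```python
# Toy Bayesian update: how an evidence-based prior combines with an
# argument-based likelihood ratio. All numbers here are invented.

def posterior(prior_p, likelihood_ratio):
    """One Bayesian update in odds form; returns a probability."""
    prior_odds = prior_p / (1.0 - prior_p)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Evidence-based prior: colliders run so far have not destroyed the
# world, so the hypothesis "this one will" starts out tiny (invented).
prior = 1e-6

# "Consistent internal logic": suppose the argument, if sound, makes our
# observations 100x more likely under the doom hypothesis (invented).
argument_strength = 100.0

print(posterior(prior, argument_strength))  # ~1e-4, nowhere near 75%
```

The sketch cuts both ways: a headline figure like 75% has to come from somewhere, either a non-negligible prior or an overwhelmingly strong likelihood ratio, and it is fair to ask which one is doing the work.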
What if someone came along making coherent arguments about some existential risk, say that some sort of particle collider might destroy the universe? I would ask what the experts who are not associated with the person making the claims think. What would you think if he simply said, “Do you have better data than me?” Or, “I have a bunch of good arguments”?
And that, well, that is actually a reasonable point. You have been given some links (regarding human behavior) that are a good answer to the question, but it is nevertheless non-trivial. Unfortunately, now you are actually going to have to do the work and read them.
> … just make sure the consistent internal logic is well grounded on known physics.
Is it? That smarter(faster)-than-human intelligence is possible is well grounded on known physics? If that is the case, how does it follow that intelligence can be applied to itself effectively, to the extent that one could realistically talk about “explosive” recursive self-improvement?
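As an aside, “explosive” can at least be given a concrete reading with a toy growth model. The sketch below only illustrates what the question is asking, not anyone’s actual argument; `alpha`, `rate`, and the step count are made-up parameters.

```python
# Toy model of recursive self-improvement: a system with capability c
# invests in improving itself, and the payoff of that work scales as
# c**alpha. alpha is a made-up knob for "returns on intelligence".

def trajectory(alpha, c0=1.0, rate=0.1, steps=25):
    """Iterate c <- c + rate * c**alpha and return the whole path."""
    c, path = c0, [c0]
    for _ in range(steps):
        c += rate * c ** alpha
        path.append(c)
    return path

# alpha < 1: diminishing returns   -> roughly polynomial growth.
# alpha = 1: proportional feedback -> exponential growth.
# alpha > 1: superlinear feedback  -> faster than exponential; in the
#            continuous limit it blows up in finite time ("explosive").
for alpha in (0.5, 1.0, 1.5):
    print(alpha, round(trajectory(alpha)[-1], 1))
```

On this reading, the question above is whether the real exponent sits above or below one, and that is an empirical claim about returns on cognitive work that the toy model cannot settle.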
> That smarter(faster)-than-human intelligence is possible is well grounded on known physics?
Some still seem sceptical—and you probably also need some math, compsci and philosophy to best understand the case for superhuman intelligence being possible.
Not only is there evidence that smarter-than-human intelligence is possible; it is something that should be trivial given a vaguely sane reductionist model. Moreover you specifically have been given evidence on previous occasions when you have asked similar questions.
What you have not been given, and what is not available, are empirical observations of smarter-than-human intelligences existing now. That is evidence to which you would not be entitled.
> Moreover you specifically have been given evidence on previous occasions when you have asked similar questions.
Please provide such a link. (Going off-topic, I would suggest that a “show all threads with one or more comments by users X, Y and Z” or a “show all conversations between users X and Y” feature on Less Wrong might be useful.)
It is currently not possible for me to either link or quote. I do not own a computer in this hemisphere, and my Android does not seem to have keys for brackets or greater-than symbols. Workarounds welcome.
The solution varies by model, but on mine, alt-shift-letter physical key combinations produce special characters that aren’t labelled. You can also use the on-screen keyboard, and there are more on-screen keyboards available for download if the one you’re currently using is badly broken.
SwiftKey X beta. Brilliant!
OK, can I have my quote(s) now? It might just be hidden somewhere in the comments to this very article.
Can you copy and paste characters?
Uhm… yes? It’s just something I would expect to be integrated into any probability estimates of suspected risks. More here.
> Who would be insane enough to experiment with destroying the world?
Check the point that you said is a reasonable one. And I have read a lot without coming across any evidence yet. I do expect an organisation like the SIAI to have detailed references and summaries of its decision procedures and probability estimates transparently available, not hidden beneath thousands of posts and comments. “It’s somewhere in there, line 10020035, +/- a million lines…” is not transparency! Especially not for an organisation that is concerned with something taking over the universe and that asks for your money. An organisation, I’m told, some of whose members get nightmares just from reading about evil AI...