I don’t think informal arguments can convince people on topics where they have made up their minds. You need either a proof or empirical evidence.
Show us a self-improving something. Show us that it either does or doesn’t self-improve in surprising and alarming ways. Even if it self-improves only in very narrow, limited ways, that would be illuminating.
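For concreteness, here is a minimal sketch (names and numbers all hypothetical) of a program that is self-improving in one very narrow sense: a (1+1) evolution strategy that mutates its own mutation step size along with the candidate solution, so the improvement process is itself subject to improvement.

```python
import random

def fitness(x: float) -> float:
    """Toy objective with a single peak at x = 3."""
    return -(x - 3.0) ** 2

# (1+1) evolution strategy with self-adaptive step size: each trial
# perturbs both the solution and the optimizer's own parameter.
x, step = 0.0, 1.0
for _ in range(500):
    new_step = step * random.choice([0.8, 1.0, 1.25])  # tweak the improver itself
    candidate = x + random.gauss(0.0, new_step)
    if fitness(candidate) >= fitness(x):
        x, step = candidate, new_step  # keep the solution and the step size that found it

print(f"best x = {x:.4f}  (self-tuned step size = {step:.5f})")
```

Nobody would call this alarming, which is rather the point: with a concrete toy in hand, you can ask exactly what extra ingredient would make self-improvement surprising.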
Explain how various arguments would apply to real existing AI-ish systems, like self-driving cars, machine translation, Watson, or a web search engine.
Give a proof that some things can or can’t be done. There is a rich literature on uncomputable and intractable problems. We do know how to prove properties of computer programs; I am surprised at how little this gets mentioned on LW.
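As one illustration of what is already routine, here is a minimal sketch in Lean 4 (an assumed choice of tool; any proof assistant would do): a toy program together with a machine-checked proof of one of its properties.

```lean
-- A toy program: list reversal...
def rev : List Nat → List Nat
  | []      => []
  | x :: xs => rev xs ++ [x]

-- ...and a machine-checked proof that it preserves list length.
theorem rev_length (l : List Nat) : (rev l).length = l.length := by
  induction l with
  | nil => rfl
  | cons x xs ih => simp [rev, List.length_append, ih]
```

Proofs like this don’t yet scale to large systems, but impossibility results (uncomputability, intractability) don’t need to scale: they apply to any program whatsoever.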
I’ve posted on that also. For example, predictions fight against the butterfly effect: at best the prediction horizon doubles when you square the computing power (and that’s given unlimited knowledge of the initial state!). It’s pretty well demonstrable on the weather, for instance, but of course rationalizers can always argue that it ‘wasn’t demonstrated’ for some more complex case. There are tasks at which something whose computing power is to mankind’s as mankind’s is to one amoeba’s will at best only double the ability compared to mankind (or gain much less than double). LW is full of intuitions where you posit something that is to us, in terms of computing power, as we are to one amoeba, and then intuit that it can actually do things as much better than we can as we can versus the amoeba. Which just ain’t so.
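A minimal sketch of that scaling claim, using the r = 4 logistic map as a stand-in for a chaotic system (it loses roughly one bit of predictability per step, so the arithmetic is easy to eyeball): squaring the precision of the initial state, i.e. doubling the number of correct digits, only roughly doubles the number of steps before two nearby trajectories visibly diverge.

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # enough working digits that rounding isn't the bottleneck

def horizon(eps: Decimal, threshold: Decimal = Decimal("0.1")) -> int:
    """Steps until two logistic-map trajectories started eps apart
    first differ by more than `threshold`."""
    x, y = Decimal("0.3"), Decimal("0.3") + eps
    t = 0
    while abs(x - y) <= threshold:
        x = 4 * x * (1 - x)
        y = 4 * y * (1 - y)
        t += 1
    return t

# eps -> eps**2 (doubling the digits of initial-state precision)
# roughly doubles the horizon instead of squaring it:
for exp in (4, 8, 16, 32):
    print(f"eps = 1e-{exp:<2}  horizon = {horizon(Decimal(10) ** -exp)} steps")
```

The horizon grows only with the logarithm of the precision, so piling on computing power buys linear, not proportional, gains in prediction time.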