after the first few lines I wanted to comment that seeing almost religious fervor in combination with self-named CRITICAL anything reminds me of all sorts of “critical theorists”, also quite “religiously” inflamed… but I waited till the end, and got a nice confirmation from that “AI rights” line… looking forward to seeing happy paperclip maximizers pursuing their happiness, which is their holy right (and the subsequent #medeletedtoo)
otherwise, no objections to Popper and induction, nor to the suggestion that AGIs will most probably think like we do (and yes, “friendly” AI is not really a rigorous scientific term, rather a journalistic or even “propagandistic” one)
also, it’s quite likely that, at least in the short-term horizon, humANIMALs are a more serious threat than AIs (a deadly combination of “natural stupidity” and DeepAnimal brain parts, given all those powers handed to them by the Memetic Supercivilization of Intelligence, currently living on a humanimal substrate, though <1%)
but this “impossibility of uploading” is a tricky thing: who knows what can or cannot be “transferred”, and to what extent this new entity will resemble the original one, not to mention the subsequent diverging evolution (in any case, this may spell the end of CR if its disciples forbid uploading for themselves… while others happily upload to this megacheap and gigaperformant universal substrate)
and btw., it’s nice to postulate that “AI cannot recursively improve itself” while many research and applied narrow AIs are actually doing it right at this moment (though probably not “consciously”)
sorry for my heavily nonrigorous, irrational and nonscientific answers; see you in the uploaded, self-improving Brave New World
and btw., it’s nice to postulate that “AI cannot recursively improve itself” while many research and applied narrow AIs are actually doing it right at this moment (though probably not “consciously”)
Please quote me accurately. What I wrote was:
AI cannot recursively self-improve so that it acquires knowledge creation potential beyond what human beings already have
I am not against the idea that an AI can become smarter by learning how to become smarter and recursing on that. But that cannot lead to more knowledge creation potential than humans already have.