Looking at us humanimals and our limitations, it should be quite obvious that we are just another stage (the last biological one) in the evolution of intelligence, and that we are here only to create our non-biological successor before we manage to destroy ourselves (and maybe the whole Earth).
As far as the organisation of society is concerned, nothing much has changed over the millennia: still the same good old animalistic forces (politico-oligarchical predators “eating” the herd of mental herbivores, no longer slaves but voters in the demogarchical system, where the demos votes but the candidates are selected and paid for by the oligarchy). No surprise, since the brain hasn’t changed much either.
Just don’t be fooled by the memetic supercivilization of less than a tenth of a percent sitting on top of that humanimal “noise”. It gives humanimals everything it creates, for free and without any control (science and ideas in general, and the resulting inventions and powerful technologies), only to see everything regularly misused by the ruling predators. Nukes were already over the limit, and nanobot stuff will clearly be too much to handle: imagine a DIY nuclear grenade for a dollar that any teenager can assemble at home. Nanobots will be orders of magnitude worse, and there are certainly many more risks one can think of.
Will the Singularity manage to make it in time?
That’s a good question!
Forget “friendly AI”; it’s the other way round: humanimals are the problem.
Technically, there could be two types of successors:
1) More complex and interesting ones, like post-humans and some types of strong AI.
2) Boring ones, just complex enough to kill us. Examples are grey goo, paperclip maximizers, or a SETI-attack AI.
The first type may still be unfriendly and dangerous, as Homo sapiens was for the Neanderthals, but it would continue the evolution of intelligence on Earth. The second type is the End.