What I am arguing is that the power of intelligence is vastly overestimated, and therefore so are any associated risks. There are many dumb risks that can easily accomplish the same thing: wipe us out. It doesn’t take superhuman intelligence to do that. I also do not see enough evidence for the premise that other superior forms of intelligence are very likely to exist. Further, I argue that there is no hint of any intelligence out there reshaping its environment. The stars show no sign of intelligent tinkering. I provided many other arguments for why other risks might be more worthy of our contributions. I came up with all those ideas in the time it took to write those comments. I simply expect a lot more arguments and other kinds of evidence supporting their premises from an organisation that has been around for over 10 years.
There are many dumb risks that can easily accomplish the same thing: wipe us out. It doesn’t take superhuman intelligence to do that.
Yes, there are dumb risks that could wipe us out just as well, but only a superhuman intelligence with desires different from humanity’s is guaranteed to wipe us out.
I also do not see enough evidence for the premise that other superior forms of intelligence are very likely to exist.
You don’t need qualitative differences: just take a human-level intelligence and add enough hardware that it can run many times faster than the best human thinkers and hold far more things in its mind at once. If it came to a fight, the humanity of 2000 could easily muster the armies to crush the best troops of 1800. That’s just the result of 200 years of technological development and knowledge acquisition, and doesn’t even require the humans of 2000 to be any more intelligent than the humans of 1800.
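To put rough numbers on the speed point, here is a toy back-of-the-envelope sketch; the 1000x speedup factor is purely an assumption for illustration, not a figure claimed anywhere in this discussion.

```python
# Toy arithmetic for the speed argument: a mind that runs N times faster
# than a human thinker accumulates N subjective years of thinking per
# calendar year. The speedup factor below is a hypothetical assumption,
# chosen only to make the numbers concrete.

SPEEDUP = 1000        # hypothetical: runs 1000x faster than the best human thinkers
GAP_YEARS = 200       # the 1800-vs-2000 development gap used in the analogy above

subjective_years_per_calendar_year = SPEEDUP * 1
calendar_days_to_match_gap = GAP_YEARS / SPEEDUP * 365

print(f"Subjective thinking time per calendar year: {subjective_years_per_calendar_year} years")
print(f"Calendar time to accumulate a {GAP_YEARS}-year head start: "
      f"about {calendar_days_to_match_gap:.0f} days")
```

Under that assumed speedup, the faster mind gets roughly a millennium of subjective thinking time per calendar year, and matches the 200-year gap in the analogy in a matter of weeks.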
Further I argue that there is no hint of any intelligence out there reshaping its environment.
We may not have observed aliens reshaping their environment, but we can certainly observe humans reshaping theirs. This planet is full of artificial structures. We’ve blanketed the Earth with lights that can be seen anywhere we’ve bothered to establish habitation. We’ve changed the Earth so much that we’re disturbing global climate patterns, and now we’re talking about large-scale engineering work to counteract those disturbances. If I choose to, there are ready transportation networks that will get me pretty much anywhere on Earth, and ready networks for supplying me with food, healthcare and entertainment on all the planet’s continents (though admittedly Antarctica is probably a bit tricky from a tourist’s point of view).
I also do not see enough evidence for the premise that other superior forms of intelligence are very likely to exist.
It seems rather easy to imagine humans being given the “Deep Blue” treatment (decisively outclassed by machines, as human chess players were) in a wide range of fields. I don’t see why this would be a sticking point. Human intelligence is plainly just awful, in practically any domain you care to mention.
Further I argue that there is no hint of any intelligence out there reshaping its environment.
Uh, that’s us. *wave*
In case you didn’t realise, humanity is the proof of concept that superior intelligence is dangerous. Ask a chimpanzee.
I also do not see enough evidence for the premise that other superior forms of intelligence are very likely to exist.
Have you taken an IQ test? Anyone who scores significantly higher than you constitutes a superior form of intelligence.
There are many dumb risks that can easily accomplish the same thing: wipe us out. It doesn’t take superhuman intelligence to do that.
Few such dumb risks are being pursued by humanity. Superhuman intelligence solves all dumb risks unless you postulate a dumb risk that is in principle unsolvable; something like a false-vacuum collapse might do it.
Contributing to the creation of Friendly AI (FAI) doesn’t just decrease the likelihood of unFriendly AI (UFAI); it also decreases the likelihood of all the other scenarios that end with humanity ceasing to exist.
Large brains can be dangerous to those who don’t have them. Look at the current human-caused mass extinction.