I consider the claim that dangerous recursive self-improvement follows naturally from general intelligence to be about as unlikely as an AGI that is automatically friendly.
Well, technological progress is already acting in an autocatalytic fashion. Progress is fast, and many people are losing their jobs and suffering as a result. It seems likely that progress will get faster still, and that even more people will be affected by this kind of future shock.
We see autocatalytic improvements in technology taking place today, and they seem likely to become more common in the future.
Climbing the Tower of optimisation is not inevitable, but it looks as though it would take a totalitarian government to slow progress down.