Yes. My (admittedly poor) judgment is that while I’d certainly be crushed by an unfriendly intelligence explosion, I probably wouldn’t survive long in something like Robin Hanson’s “em world” either.
But answering the intelligence explosion question becomes important when it comes to strategies for surviving the development of above-human-level AI. If unfriendly intelligence explosions are likely, that severely limits which strategies will work. If friendly intelligence explosions are possible, that suggests a strategy which might work.