That would be small comfort if WWIII erupted, triggering a nuclear winter.
A doomsday device doesn’t have to be a commercially viable product. It just has to be used, once.
Unless you can show it is reasonably likely that SIRI will take over the world, that is a Pascal’s mugging.
I have my doubts about SIRI, but I think the plausibility of AI risk has already been shown in MIRI's writing, and I don't see much point in repeating the arguments here. Regarding Pascal's mugging, I believe in bounded utility functions. So, yes, something with low probability and dire consequences is important up to a point. But AI risk is not even something I'd say has low probability.