Nobody was ever tempted to say, “But as the nuclear chain reaction grows in power, it will necessarily become more moral!”
We became better at constructing nuclear power plants, and nuclear bombs became cleaner. What critics are saying is that as AI advances, our control over it advances as well. In other words, the better AI becomes, the better we become at making it work as expected. If AI became increasingly unreliable as its power grew, it would cease to be a commercially viable product.
That’s one of the standard responses to the MIRI argument, but not the same as the Artificial Philosopher response. I call it the SIRI versus MIRI response.
That would be small comfort if WWIII erupted, triggering a nuclear winter.
A doomsday device doesn’t have to be a commercially viable product. It just has to be used, once.
Unless you can show it is reasonably likely that SIRI will take over the world, that is a Pascal’s mugging.
I have doubts about SIRI, but I think the plausibility of AI risk has already been shown in MIRI's writing, and I don't see much point in repeating the arguments here. Regarding Pascal's mugging, I believe in bounded utility functions. So, yeah, something with low probability and dire consequences is important, but only up to a point. And AI risk is not even something I'd say has low probability.
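To spell out why bounded utility blunts a Pascal's mugging (a minimal sketch; the bound $U_{\max}$ is notation I'm introducing, not something from the discussion): if utility is bounded, $|U(x)| \le U_{\max}$ for every outcome $x$, then the expected-utility contribution of any outcome with probability $p$ satisfies

$$\left|\, p \cdot U(x) \,\right| \le p \cdot U_{\max} \to 0 \quad \text{as } p \to 0,$$

so the mugger cannot compensate for an arbitrarily small probability by quoting an arbitrarily large payoff. With an unbounded utility function, by contrast, $p \cdot U(x)$ can be made as large as you like for any fixed $p > 0$, which is exactly what the mugging exploits.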