Thanks for the quick reply.
I agree that certain “organizations” can be very, very dangerous. That’s one reason we want to create AI: we can use it to defeat those organizations (as well as to fix, or greatly reduce, many other problems in society).
I hold that Unfriendly AI+ would be more dangerous, but if these “organizations” are as dangerous as you say, you are correct that we should put some focus on them as well. If you have a better plan for stopping them than creating Friendly AI, I’d be interested to hear it. What you might be missing is that AI is also a positive factor in global risk; see Yudkowsky’s paper “Artificial Intelligence as a Positive and Negative Factor in Global Risk.”