You don’t seem to want to say anything about how you are so confident. Can you say something about why you don’t want to give an argument for your confidence? Is it just too obvious to bother explaining? Or is there too large an inferential distance even with LW readers?
...
Tried writing a paragraph or two of explanation, gave it up as too large a chunk. It also feels to me like I’ve explained this three or four times previously, but I can’t remember exactly where.
If anyone can find it, please post! It seems to me to be contrary to Einstein’s Arrogance, so I’m interested to see why it’s not.
I think I understand your objections, at least in outline. I think there is some significant epistemic probability that they are wrong, but even if they are correct, I don’t think that at all rules out the possibility that a boxed unfriendly AI can give you a really friendly AI. My most recent post takes the first steps towards doing this in a way that you might believe.