Normally yes, but this case involves a potentially adversarial agent with intelligence and optimizing power vastly superior to your own, and which cares about your epistemic state as well as your actions.
Look, my post addressed these issues, and I’d be happy to discuss them further if the ground rules were clear. Right now, we’re not having that discussion; we’re talking about whether that discussion is desirable and, if so, how to make it possible. I think the truth will out; if you’re right, you’ll probably win the discussion. So although we disagree about the danger, we should agree to discuss it within some well-defined ground rules that can be comprehensibly summarized in a safe form.
Really? Go read the sequences! ;)
Hell? That’s it?