The difference between near light speed and actual light speed may be significant when dominance of the universe is the prize.
Which is a good argument for why a smart AI wouldn’t announce its malicious intentions by sending some sort of universal computer code (which would reveal its intentions while still carrying a significant chance of failure), and would instead just send its little optimizing cloud of nanomagic directly.
The first indication that something’s wrong would be your legs turning into paperclips (“the tickets are now diamonds” style).
Agree.
It may also be that a well-designed radio wave front colliding with a planet or a gas cloud can produce some artifacts, so that a SETI-capable civilisation isn’t even necessary.
The optimizer your optimizer could optimize like.
Speaking of triple-O, go continue your computational theology blog o.O
I will when I figure out how to solve this problem: I’m trying to accomplish two major objectives.
The more important objective is to explain to people how we can use concepts from mathematical fields, especially algorithmic information theory and reflective decision theory, to elucidate the fundamental nature of justification, especially any fundamental similarities or relations between epistemic and moral justification. (The motivation for this approach comes from formal epistemology; I’m not sure if I’ll have to spend a whole post on the motivations or not.)
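(For readers who haven’t met this material, one representative example of the kind of concept I mean from algorithmic information theory is the Solomonoff prior; I offer it only as an illustration of the flavour of formal tool involved, not as the specific construction the posts would rest on. In its discrete form it assigns to a string x the probability

m(x) = Σ_{p : U(p) = x} 2^(−|p|),

where U is a universal prefix Turing machine and |p| is the length of program p. Simpler, shorter explanations of the observed data get more prior weight, which is one precise way of cashing out a notion of epistemic justification.)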
The less important objective is to show that theology, or more precisely theological intuitions, represent a similar approach to the same problem, and that it makes sense and isn’t just syncretism to interpret theology in light of (say) algorithmic information theory and vice versa. But motivating this would require many posts on hermeneutics; without sufficient justification, readers could reasonably conclude that bringing in “God” (an unfortunately political concept) is at best syncretism and at worst an attempt to force through various connotations. I’m more confident when it comes to explaining the math: even if I can be accused of overreaching with the concepts, at least the concepts themselves are agreed to have a very solid foundation. When it comes to hermeneutics, though, I inevitably have to make various qualitative arguments and judgment calls about how to make judgment calls, and I’m afraid of messing that up; I’m also just more likely to be wrong there.
So I have to think about whether to try to tackle both problems at once, which I would like to do but would be quite difficult, or to just jump into the mathematics without worrying so much about tying it back to the philosophical tradition. I’d really prefer the former but I haven’t yet figured out how to make the presentation (e.g., the order of ideas to be introduced) work.
So, the fact that in natural languages it’s easy to be ambiguous between epistemic and moral modality (e.g. “should” in English can mean either ‘had better’ or ‘is most likely to’) may be a Feature Not A Bug? (Well, I think that’s due to a quirk of human psychology¹, but if humans have that quirk, it must have been adaptive, or a by-product of something adaptive, at least in the EEA.)
How common is this among the world’s languages? The more common it is, the more likely my hypothesis, I’d guess.