I suspect that massive destabilization following the precipitous fall of most of the great powers (NATO + Russia at the least) would result in war on every continent (sans Antarctica). If, as you suppose, Asian countries don’t get nuked in this scenario, I think it’s quite plausible that general war in Asia would follow shortly as the surviving greatest powers jockey for dominance. If we posit the complete collapse of U.S. power projection in the Pacific, surely China is best positioned to fill the void, and I don’t think it’s clear where they’d draw the new lines.
In practice, leading thinkers in EA seem to interpret AGI as a special class of existential threat (i.e., something that could effectively ‘cancel’ the future)
This doesn’t seem right to me. "Can effectively ‘cancel’ the future" seems like a pretty good approximation of the definition of an existential threat in general. My understanding is that A.I. risk is treated differently because of a cultural commonality among said leading thinkers: A.I. risk is considered a more likely and more imminent threat than other X-risks. There’s also a less widespread (I think) concern that A.I. can involve S-risks to which other threats have no analogue.
These are just one native speaker’s impressions, so take them with a grain of salt.
Your first two examples, to me, scan as being about abstract concepts: respectively, the emotion/quality of curiosity and the property of being in context.
This Quora result indicates that it’s a quality of "definiteness" that determines when articles get dropped (maybe as a second-language learner you already have this as explicit knowledge, but find it difficult to intuit).
In those examples, the meaning doesn’t rely on pointing at two specific "curiosity" and "context" objects that have to be precisely designated; it relies on the set phrases "out of curiosity" and "in context," which respectively describe an unmentioned action or object.
I think the article in the last example is dropped for a completely different reason. The “definiteness” argument doesn’t apply, but my instinct is that this is simple terseness in the communication from UI to user. Describing every UI element with precise language would result in web pages that resemble legal documents.
It’s possible you’re in Ease Hell. It has been a while since I got into the weeds with my settings, but as I recall there are pretty good reasons to change the default ease settings and reset the ease on old cards. I’m also in the camp of only using the "again" and "good" buttons, since the other ones affect ease, iirc. Anyway, you’ve been at it longer than I have, but maybe the Ease Hell thing is new info for you or other Anki users.
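To make the ease mechanics concrete, here’s a rough Python sketch of how an Anki-style scheduler updates ease and intervals. The button names, multipliers, and defaults are from memory (roughly the Anki 2 scheduler defaults), so treat the exact numbers as assumptions rather than a faithful reimplementation:

```python
# Rough sketch of Anki-style ease/interval updates (numbers from memory, not authoritative).

def review(interval_days, ease, answer):
    """Return (new_interval, new_ease) after one review.

    answer is one of: "again", "hard", "good", "easy".
    Assumed defaults: starting ease 2.5, ease floor 1.3,
    hard multiplier 1.2, easy bonus 1.3.
    """
    if answer == "again":
        ease = max(1.3, ease - 0.20)        # a lapse cuts ease by 20 points
        interval_days = 1.0                 # card goes back to (roughly) the start
    elif answer == "hard":
        ease = max(1.3, ease - 0.15)        # "hard" also cuts ease -- this is the slide into Ease Hell
        interval_days = interval_days * 1.2
    elif answer == "good":
        interval_days = interval_days * ease  # ease unchanged
    elif answer == "easy":
        ease = ease + 0.15
        interval_days = interval_days * ease * 1.3
    return interval_days, ease

# A card that keeps getting "hard"/"again" ratchets its ease down toward the 1.3 floor,
# so its intervals grow ~1.3x instead of ~2.5x and it shows up far more often than it should.
interval, ease = 3.0, 2.5
for answer in ["hard", "again", "hard", "good", "good"]:
    interval, ease = review(interval, ease, answer)
    print(f"{answer:5s} -> interval {interval:5.1f} days, ease {ease:.2f}")
```

The takeaway is that "hard" and "again" both push ease down toward the floor, while "good" never pushes it back up, so a card that has lapsed a few times stays permanently slow-growing and over-reviewed unless you reset its ease. That stuck state is what people call Ease Hell.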
I wish the cuteness made a difference. Interesting reading though, thanks.
Is that link safe to click for someone with arachnophobia?
I appreciate the clarification. At first #1 seemed dissonant to me (and #2 and #3 following from it), given the trope of highly inbred European nobility, but on further reflection that might be mostly a special case due to dispensations. I hadn’t thought of worldwide consanguinity/marriage norms as a potential X factor for civilizational development, but it’s an interesting angle.
Just to clarify, with this sentence:
Christianity was also unusual in other potentially key dimensions—it dramatically promoted outbreeding (by outlawing inbreeding far beyond the typical), which plausibly permanently altered the european trajectory.
are you proposing that Christian Europe was historically successful in significant part due to inbreeding less than non-Christian-European civilizations? Is there somewhere I can read more about that thesis? I’m not familiar with it.
Without even getting into whether your specific reward heuristic is misaligned, it seems to me that you’ve just shifted the problem slightly out of the focus of your description of the system, by specifying that all of the work will be done by subsystems that you’re simply assuming will be safe.
“paperclip quality control” has just as much potential for misalignment in the limit as does paperclip maximization, depending on what kind of agent you use to accomplish it. So, even if we grant the assumption that your heuristic is aligned, we are merely left with the task of designing a bunch of aligned agents to do subtasks.
You imply that you understand it’s a metaphor, but your other sentences seem to insist on taking the word “wrestling” literally as referring to the sport. The sentence in bold
“This was no passive measure to confirm a hypothesis, but a wrestling with nature to make her reveal her secrets.”
makes it pretty clear, I think. Do you simply not like the metaphor?