The post neither describes nor implies a traditional war with congressional approval. For example, placing a battleship in international waters but close enough to China’s maritime space could be enough to trigger another Arkhipov-style situation. This is just one specific scenario; some don’t lead to disastrous outcomes and some do. The post is intended to spark discussion and point to the risks, not to set policy without debate. As stated, we entered 2020 at the worst level of risk ever, and then a pandemic happened, along with a series of other supposedly unthinkable events.
In my understanding, the ban in Sequences-inspired rationality, particularly on politically-charged topics, is a reminder that Politics is the Mind-Killer. I made it explicit in the text.
Wikipedia has an article for “Considered Harmful”. “Go To Statement Considered Harmful” was the title an editor gave to Dijkstra’s letter, originally titled “A Case Against the Go To Statement”. It’s an informal tradition in computer science to write papers with this title pattern, including “‘Considered Harmful’ Essays Considered Harmful”.
It was not intended to be misleading, only a reference for a crowd that I, perhaps erroneously, assumed would be familiar with the pattern.
Now, on to the core of the argument. First, the epistemic status says I’m uncertain about the risk values and how to reduce them. I linked to a Bulletin of the Atomic Scientists article about why this debunked idea keeps coming up and the harms associated with it. Just printing articles and pointing people to them wasn’t enough. I don’t have more to say about your specific arguments because I think they’re covered pretty well by the article I linked.
This post was meant to point out that this problem exists, that credible experts on extinction risk (i.e. the Bulletin of the Atomic Scientists) think it’s a worrying trend, that current economic patterns resemble past situations that gave birth to extreme right-wing governments, and that current institutions seem unable or unwilling to curb Trump’s excesses.
He seems to respond to actual public opinion (from his electorate).
The article also ends with a non-rhetorical question that seems to have been misunderstood as alarmism.
miniKanren is a logic/relational programming language. It has been used to answer questions about programs. For example, once you give miniKanren a description of the untyped λ-calculus extended with integers, you can ask it “give me programs that result in 2” and it will enumerate programs, from the constant “2” to “1 + 1” to more complicated versions using λ-expressions. It can even find quines (if the described language supports them).
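To make the enumeration idea concrete, here’s a rough sketch in Haskell (not miniKanren itself, and over a much simpler language than the λ-calculus example): enumerate terms of a tiny expression language by size and keep the ones that evaluate to the target.

```haskell
-- A toy expression language: integer literals (0..2) and addition.
data Expr = Lit Int | Add Expr Expr deriving Show

eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b

-- All expressions with exactly n constructors.
exprsOfSize :: Int -> [Expr]
exprsOfSize 1 = [Lit k | k <- [0 .. 2]]
exprsOfSize n = [ Add a b
                | i <- [1 .. n - 2]
                , a <- exprsOfSize i
                , b <- exprsOfSize (n - 1 - i) ]

-- Lazily enumerate every program that results in the target value.
programsFor :: Int -> [Expr]
programsFor target = [e | n <- [1 ..], e <- exprsOfSize n, eval e == target]
```

`take 3 (programsFor 2)` yields `Lit 2`, `Add (Lit 0) (Lit 2)`, and `Add (Lit 1) (Lit 1)` — the analogue of miniKanren producing “2”, “0 + 2”, “1 + 1”, and so on. The difference is that miniKanren does this relationally rather than by brute-force generate-and-test, which is what lets it run descriptions “backwards” and find things like quines.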
The Nanopass Framework is built for that:
“The nanopass framework provides a tool for writing compilers composed of several simple passes that operate over well-defined intermediate languages. The goal of this organization is both to simplify the understanding of each pass, because it is responsible for a single task, and to simplify the addition of new passes anywhere in the compiler.”
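As a toy illustration of that organization (in Haskell rather than the framework’s own Scheme DSL, and invented for this comment), here are two tiny passes, each mapping one well-defined intermediate language to the next:

```haskell
-- Source language: literals, addition, and negation.
data L0 = Lit0 Int | Add0 L0 L0 | Neg0 L0
-- Intermediate language: same, but negation has been removed.
data L1 = Lit1 Int | Add1 L1 L1

-- Pass 1 (single task): push negation down to the literals.
removeNeg :: L0 -> L1
removeNeg (Lit0 n)          = Lit1 n
removeNeg (Add0 a b)        = Add1 (removeNeg a) (removeNeg b)
removeNeg (Neg0 (Lit0 n))   = Lit1 (negate n)
removeNeg (Neg0 (Neg0 e))   = removeNeg e
removeNeg (Neg0 (Add0 a b)) = Add1 (removeNeg (Neg0 a)) (removeNeg (Neg0 b))

-- Pass 2 (single task): constant-fold the negation-free language.
constFold :: L1 -> Int
constFold (Lit1 n)   = n
constFold (Add1 a b) = constFold a + constFold b

compile :: L0 -> Int
compile = constFold . removeNeg
```

Each pass is trivial to understand in isolation, and a new pass can be slotted in anywhere the intermediate languages line up, which is exactly the selling point quoted above.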
I’m going.
I’m going again, it was too fun/interesting to miss.
Count me in.
Around São Paulo, yes. Around LW, not much anymore; I mostly read it via a feed reader.
This model seems to be reducible to “people will eat what they prefer”.
A good model would reduce the number of bits needed to describe a behavior. If the model has to keep a log (e.g. what particular humans prefer to eat) in order to predict something, it’s not much less complex (i.e. in bit encoding) than the behavior itself.
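To make the bit-counting concrete, here’s a toy calculation (all numbers invented for illustration):

```haskell
-- Description length, in bits, of a year of daily meal choices among 4 options.
rawBits :: Double
rawBits = 365 * logBase 2 4      -- no model at all: 730 bits

-- A "model" that merely stores each person's preference log saves nothing:
logModelBits :: Double
logModelBits = rawBits + 100     -- the same log, plus ~100 bits of model code

-- A genuinely predictive rule (e.g. "always picks the stored favorite")
-- actually compresses the behavior:
ruleModelBits :: Double
ruleModelBits = logBase 2 4 + 100  -- one favorite, plus the model code
```

The log-based “model” costs more bits than the raw behavior; only a model that replaces the log with a shorter rule earns its keep.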
I agree “vague” is not a good word choice. “Irrelevant” (using relevance as it’s used to describe search results) is a better word.
I would classify such predictions as vague; after all, they match equally well for every human being in almost any condition.
There’s no way to create a non-vague, predictive model of human behavior, because most human behavior is (mostly) random reaction to stimuli.
Corollary 1: most models explain after the fact, and require both that the subject be aware of the model’s predictions and that the predictions be vague and underspecified enough to make astrology seem like spacecraft engineering.
Corollary 2: we’ll spend most of our time in drama, trying to understand the real reasons or the truth behind our own and others’ behavior even when presented with evidence pointing to the randomness of our actions. After the fact we’ll fabricate an elaborate theory to explain everything, including the evidence, but this theory will have no predictive power.
It doesn’t seem to me that you have an accurate description of what a super-smart person would do or say, other than matching your beliefs and providing insightful thoughts. For example, do you expect super-smart people to be proficient in most areas of knowledge, or even able to quickly grasp the foundations of different areas through super-abstraction? Would you expect them to be mostly unbiased? Your definition needs to be more objective and predictive, instead of descriptive.
How would you describe the writing patterns of super-smart people? Similarly, what would meeting/talking to/debating them feel like?
Hi, I’m Daniel. I’ve read OB for a long time and followed LW right from the beginning, but work/time issues in the last year made my RSS reading queue really long (I had all the LW posts in the queue). I’m a Brazilian programmer, long-time rationalist, and atheist.
Hi, I’m a lurker, mostly because I was reading these off my RSS queue (I accumulated thousands of entries in my RSS reader in the last year due to work/time issues).
São Paulo, Brazil
FWIW, we implemented FDT, CDT, and EDT in Haskell a while ago:
https://github.com/DecisionTheory/DecisionTheory
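For a flavor of what such an encoding looks like, here’s a minimal, self-contained sketch of an EDT-style expected-value calculation on Newcomb’s problem (illustrative Haskell, not the repo’s actual API; the 0.99 predictor accuracy is an assumption):

```haskell
data Action = OneBox | TwoBox deriving (Show, Eq)

-- Assumed probability that the predictor correctly anticipated your action.
predictorAccuracy :: Double
predictorAccuracy = 0.99

-- Payoff given your action and whether the opaque box was filled.
payoff :: Action -> Bool -> Double
payoff OneBox True  = 1000000
payoff OneBox False = 0
payoff TwoBox True  = 1001000
payoff TwoBox False = 1000

-- EDT conditions the box's contents on the action taken.
edtValue :: Action -> Double
edtValue a = p * payoff a True + (1 - p) * payoff a False
  where p = case a of
              OneBox -> predictorAccuracy      -- box was likely filled
              TwoBox -> 1 - predictorAccuracy  -- box was likely left empty

best :: Action
best = if edtValue OneBox >= edtValue TwoBox then OneBox else TwoBox  -- OneBox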