You (or, rather, Dr. Hanson) should definitely rename futarchy… I can’t stop thinking about it meaning rule of futanaris :D
On a more serious note, I think the allegory fails to disentangle the faults of Bayes’s Law itself being obligatory to follow from the faults of having to maintain the corresponding bureaucracy.
Дмитрий Зеленский
While admitting ignorance of most of the current evidence, I have to note that my priors are currently strongly in favor of criminalization (at least for dealers and for recreational use; use in hospitals and in research may or may not be different). Marijuana use, from what I know, lowers IQ by several points for some time; causing this (by dealing, or as in the next sentence) is a crime in itself (arguably worse than, say, breaking someone’s toe, which is clearly criminal). Decriminalization would cause a temporary spike in use, with the consequences above. Re-criminalization would likely cause turmoil, because people are opposed to change. And the possibly-strawman argument against criminalization (which I just adapted from somewhere in my head) that “trading quantities” of a drug could be planted to frame someone does not work: one could likewise plant any substance that is forbidden to keep freely, such as an explosive.
“The Duplicator (not yet posted on LW)”—now posted, n’est-ce pas?
Unfortunately, it is quite difficult to taboo a term when discussing how (mis)interpretation of said term influenced a survey.
Moreover, even if my understanding is ultimately not what the survey-makers had in mind, the responding researchers having the same understanding as me would be enough to get the results in the OP.
I would say that, in an ideal world, the relevant skill/task is “given the analysis already at hand, write a paper that conveys it well” (and it is alarming that this skill becomes much more valuable than the analysis itself, so people get credit for others’ analyses even when they clearly state that they are merely retelling them). And I fully believe that both the task of scientific analysis (outputting the results of the analysis, not its procedure, because that is what is needed for non-meta purposes!) and the task outlined above will be achieved earlier than an AI that can actually combine them to write a paper from scratch. AND that each new simple task on the way to the occupation pushes their combination further away, even after the simple task itself is achieved.
“In your mind’s eye, it seems, you can see before you the many could-worlds that follow from one real world.” Isn’t that exactly what the many-worlds interpretation does to QM (to keep it deterministic, yada-yada-yada; to be fair, Brandon specifically stated he is not considering the QM sense, but I am not sure the sense he suggested himself is distinct)? There are worlds that are located (with not-infinitesimally-low probability-mass) in the future of the world we are in now (and they are multiple), and there are worlds that are not. The former are “realizable”, and they “follow”; whether they are reachable depends on how good the “forward search process that labels certain options as reachable before judging them and maximizing” is. My intuition says that “could” can mean the former, rather than “whatever my mind generated in the search as options” (and, moreover, that the latter is a heuristic of the mind for the former). (Unless, of course, the real bomb under this definition is that “probability-mass” hides the same “could-ness”; but if you are going to tell me that QM probability-mass is likewise reducible to labeling by a search process and this is the “correct answer”, I will find this… well, only mildly surprising, because QM never ceases to amaze me, which influences my further evaluations; still, at least I don’t see how this obviously follows from the QM sequence.)
Moreover, this quotation from Possibility and Could-ness seems to hint at a similar (yet distinct, because probability is in the mind) problem.
> But you would have to be very careful to use a definition like that one consistently. “Could” has another closely related meaning in which it refers to the provision of at least a small amount of probability.
Well, that’s not quite true. Let’s go back to the initial example: you need to write a linguistics paper. For this, you need at least two things: to perform the linguistic analysis of some data, and to actually put it in words. Yet the latter needs the internal structure of the former, not just the end result (which is all that most currently-practical applications of a machine doing linguistic analysis would use). The logic behind trees, for instance, not just a tree-parsed syntactic corpus. A neural network (an RNN or something) that makes better and quicker tree-parsed syntactic corpora than I do would just shrug (metaphorically) if asked for the procedure of tree-making. I am near-certain other sciences would show the same pattern for their papers.
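The result-vs-procedure gap can be sketched in a few lines (a hypothetical toy stand-in for a trained parser, not any real system; the bracketing rule here is deliberately opaque):

```python
# Toy illustration: a black-box "parser" yields usable output (a tree),
# but the output contains no statement of the procedure behind it,
# which is exactly what a paper would need.

def blackbox_parse(sentence):
    """Stand-in for a trained network: returns a tree, not a theory."""
    words = sentence.split()
    tree = words[-1]
    # Opaque right-branching rule standing in for learned weights.
    for w in reversed(words[:-1]):
        tree = (w, tree)
    return tree

tree = blackbox_parse("the cat sleeps")
# The corpus-building *result* is here in `tree`; the *procedure*
# (why this bracketing and not another?) is nowhere in the output.
```

The point of the sketch: even a perfect `blackbox_parse` only automates corpus production, while the paper-writing task consumes the theory that generates the trees.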
A managing AI would also have to manually handle information flow between the other AIs more generally, which is kinda “automatic” for human minds (though with some important exceptions, leading to the whole idea of mental modules à la Fodor).
Well, it was specifically the B1 mass-production droids that were made incredibly cheap and so with, let’s say, not the best AI ever. A rare model like HK-47 was superior to ordinary (neither Force-amplified nor with decades of training behind Mandalore) humans; and the latter case could also come down to a difference in available weaponry (if your weapon cannot penetrate amplified beskar armor and you only find this out at the moment of attack, you’d need to be very smart to immediately find a way to win, or to retreat before the Battle Reflexes guy shuts you off).
As for FTL, I wouldn’t be so sure; the history of research sometimes makes strange jumps. The Romans were this close to going all steampunk, and a naive modern observer could say “having steam machines without gunpowder seems unlikely”. Currently we don’t know what, if anything, could provide FTL, and the solution could jump out at us unexpectedly, unrelated to AI development.
I am afraid I cannot agree. For one, this would require a further, twenty-first AI, the “managing AI”, that does the combining. Moreover, the data exchange between these narrow AIs may be slower and/or worse (especially considering that many of the strong domain-specific AIs don’t really have extractable internal data of any use).
It can, and usually does. Note that we do see some scenes where a pilot leaves the ship and it flies away, seemingly by itself, to park or something (for instance, R4 does it to Obi-Wan’s ship in Episode III, IIRC). It might actually be a funny story of each side using organic pilots because the other side does, with astrodroids not being that good at predicting organics’ behavior, so it is just a Nash equilibrium.
I would replace “introduced” with “sold” or “made interesting” here. It’s not enough to introduce a group of people to something: unless their values are already in sync with that something’s _appearance_ (and the appearance, a.k.a. the elevator pitch, a.k.a. the hook, really matters here), you need to apply some marketing/Dark Arts/rhetoric/whatever-you-call-it to persuade them it’s worth it. And, for all the claims that “rationalists should win”, Yudkowsky-2008 was too much of a rhetoric-hater to perform that marketing (really, not noticing his own pattern of the good teachers of Defence Against the Dark Arts at Hogwarts themselves practicing Dark Arts (or, in Lupin’s case, *being* Dark Arts)?), and thus the blog went on to attract people who already shared the values: nerdy booksmarts. (Note that (a) to the best of my knowledge, HPMoR postdates the Sequences; (b) Harry Potter isn’t exactly a booksmart-selecting fandom, as shown by many factors, including the gross proportion of “watched-the-films-never-read-the-books” fans relative to readers, AND the people who imagine Draco Malfoy as a refined aristocrat whose behavior, though not nice, is perfectly calibrated, instead of the petty bully we see in both the books and the films, AND… I should stop here before I go off on a tangent. So I am not certain how relevant “Harry Potter fans” is.)
> “He’s an unmarried man, but is he a bachelor?” This is a ‘closed’ question. The answer is obviously “Yes.”
This is a false claim, unfortunately. “Bachelor” is not merely an “unmarried man”; it is an “unmarried man who could have been married in his society” (as the long-discussed examples like “#My 5-year-old son is a bachelor” and “#The Pope is a bachelor” show; “#” marks pragmatic infelicity). ETA: the part beginning with “who” is probably a presupposition rather than an assertion (“The Pope is not a bachelor” is only felicitous if used metalinguistically, as “The Pope cannot be described by the word ‘bachelor’”, not in the literal sense “The Pope is married although it is not allowed”).
> Austere Metaethicist: Your definition doesn’t connect to reality. It’s like talking about atom-for-atom ‘indexical identity’ even though the world is made of configurations and amplitudes instead of Newtonian billiard balls. Gods don’t exist.
This one is also not obviously true. We can ask what Sherlock Holmes would approve of despite the fact that he never existed (and I can imagine a morality that says “good is what Sherlock approves of”—a strange morality though it would be). Why can’t we take “an essentially just and loving God” as a similar literature character?
Subject: phonological theories
Recommendation: The Routledge Handbook of Phonological Theory
Strengths: each chapter on an approach is written by a specialist in that approach, clearly explaining what the ideas are.
Alternatives: the relevant section of Kodzasov & Krivnova’s “Obshchaya fonetika” [“General Phonetics”] (short, obscures some very important points, and leaves out some approaches); Philip Carr’s “Phonology” (somewhat outdated, being from 1993, and it plays that unpleasant trick of “this is our current theory… now let’s look at how it’s wrong and adopt a better theory… and again… and again”; while this is akin to how scientific thought proceeds, it doesn’t necessarily do justice to the theories in question).
Sorry, but I do not think that this is a terminological disagreement, nor that your “strong-manning” is faithful to my comment. I believe that Scott’s idea is somewhat inconsistent because he pits an individual example against a general idea; a specific structure against a set of psychologies.
Sounds like a (much better than the original) explanation of Igor Mel’čuk’s “structural model” vs. “functional model”: an old topic in linguistics and, arguably, other cognitive sciences.
Infertility does not entail the non-production of hormones (the most obvious examples being vasectomy in males and tubal ligation in females). It is pretty unlikely that COVID-19 actually castrates its victims; this is testable, though, by measuring levels of testosterone and estrogens.
“suppoesd” should read “supposed”.
I wish it came with an explanation of what _exactly_ the Impatience and Hubris virtues entail (given that both are generally described as non-virtues, yet I do seem to have the feeling that they can be good; the same goes for Laziness, but there I believe I already have a better understanding).
“But there are problems of principal here.” Should that be “principle”?