The chances of the LLM being able to do this depend heavily on how similar the subjects discussed in the alien language are to things humans discuss. Removing areas where there is most likely to be similarity would reduce the chance that the LLM would find matching patterns in both. Indeed, the fact that we’re imagining aliens for the example probably already greatly increases the difficulty for the LLM.
Agreed. An AI powerful enough to be dangerous is probably in particular better at writing code than us, and at least some of those trying to develop AI are sure to want to take advantage of that by having the AI rewrite itself to be more powerful (and so, they hope, better at doing whatever they want it for). So even if the technical difficulties in making code hard to change that others have mentioned could be overcome, it would be very hard to convince everyone making AIs to limit them in that way.
Logicians still can’t agree whether the symbol for if and only if should be a triple bar or a double arrow. Odds that they’d all sign up for this, rather than having it be, at best, yet another competing standard, seem low.
Some components of experience, like colors, feel simple introspectively. The story of their functions is not remotely simple, so the story of their functions feels like it must be talking about a totally different thing from the obviously simple experience of the color. Some people try to make this seem more reasonable than it is by defining an experience as consisting entirely of how things seem to us, and so as incapable of being otherwise than it seems, but this is just game playing; we are not that infallible on any subject, introspective or otherwise. The obvious solution, that what seems simple just turns out to be complicated and is in fact what the complicated functional story talks about, is surely the correct one. Don’t let Chalmers’ accent lull you into thinking he has some superior down under wisdom; listen to the equally accented Australian materialists!
Looking at the listed philosophers is not the best way to understand what’s going on here. The category of rationalists is not “philosophers like those guys,” it is one of a pair of opposed categories (the other being the empiricists) into which various philosophers fit to varying degrees. It is less appropriate for the ancients than for Descartes, Spinoza, and Leibniz (those three are really the paradigm rationalists). And the Wikipedia article is taking a controversial position in putting Kant in the rationalist category. Kant was aware of the categories (indeed, is a major source of the tradition of grouping philosophers into those two categories), and did not consider himself to belong to either of them (his preferred terms for the categories were “dogmatists” for the rationalists and “skeptics” for the empiricists, which is probably enough on its own to give you a sense for how he viewed the two groups). There is admittedly a popular line of Kant interpretation which reads him as a kind of crypto-rationalist, but there are also those of us who read him as a crypto-empiricist, and not a few who take him at his word as being outside both categories.
In any event, the empiricist tradition has at least as much influence on the Less Wrong crowd as the rationalist tradition, if not more, and really both categories work best for the early moderns and aren’t fantastic for categorizing most philosophers in the present era. So anybody familiar with the philosophical term is likely to find the application to this community initially confusing.
Healthcare system capacity shouldn’t be a flat line, though I admit that the reports I’ve seen suggest that not nearly enough effort has been devoted to ramping up to deal with the emergency. But obviously if there is an upward slope to capacity (and there are efforts to increase production of ventilators, to pick one of the most troublesome bottlenecks), that increases the benefit of curve flattening efforts.
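To make that concrete, here’s a toy sketch (every number in it is invented for illustration, none taken from real epidemic or capacity data). It compares the same total caseload arriving as a sharp early peak versus a flatter later one, measured against a flat capacity line and against one that ramps up over time:

```python
# Toy illustration (all numbers invented): the same total caseload spread
# over a short sharp peak vs. a longer flatter one, measured against a
# flat capacity line and against one that ramps up over time.

DAYS = 100
TOTAL = 10_000  # total cases, held fixed across both scenarios

def triangle(peak_day, width):
    """A crude triangular epidemic curve with a fixed total caseload."""
    raw = [max(0.0, 1 - abs(d - peak_day) / width) for d in range(DAYS)]
    scale = TOTAL / sum(raw)
    return [r * scale for r in raw]

def overflow(cases, capacity):
    """Total patient-days of demand the system cannot absorb."""
    return sum(max(c - k, 0) for c, k in zip(cases, capacity))

sharp = triangle(peak_day=25, width=10)   # unmitigated: early, narrow peak
flat = triangle(peak_day=50, width=40)    # mitigated: later, wider peak

flat_capacity = [100] * DAYS                           # capacity never grows
ramping_capacity = [100 + 3 * d for d in range(DAYS)]  # e.g. ventilator output ramps up

for label, cap in [("flat capacity", flat_capacity),
                   ("ramping capacity", ramping_capacity)]:
    saved = overflow(sharp, cap) - overflow(flat, cap)
    print(f"{label}: flattening avoids {saved:.0f} over-capacity patient-days")
```

Against the flat line, flattening already helps; against the ramping line, the flatter, later curve can duck under capacity almost entirely. That’s the point: the more capacity grows with time, the more is gained by buying time.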
Your requirements are very slightly too strong. If you have more than 6 cards in a suit, the number of them that have to be top cards is reduced. In your second example, a spade suit of A,K,Q,8,7,6,5,4,3,2 would have served just as well, as even if all the opposing spades were in one hand, playing out the A,K,Q would force them all out, making the remaining spades also winners.
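For anyone who wants to check the arithmetic, here’s a minimal sketch (the representation and function name are my own, not standard bridge software): the defenders hold 13 minus however many cards you hold, so in the worst case, where all of those sit in one hand, you need exactly that many top cards to pull them before your small cards become good.

```python
# Worst-case arithmetic for running a suit: the defenders hold
# 13 - len(holding) cards, and if all of them are in one hand you need
# that many sure winners (the top ranks of the suit) to draw them all
# without losing a trick, after which every remaining small card is high.
RANKS = ["A", "K", "Q", "J", "10", "9", "8", "7", "6", "5", "4", "3", "2"]

def runs_without_loss(holding):
    """True if the suit takes every trick even on the worst possible split."""
    defenders = 13 - len(holding)  # cards held by the opponents
    return all(rank in holding for rank in RANKS[:defenders])

# The ten-card example: A,K,Q plus seven small spades. The opponents hold
# only three cards, so three rounds of top winners clear them all.
print(runs_without_loss(["A", "K", "Q", "8", "7", "6", "5", "4", "3", "2"]))  # True

# With a seven-card holding the opponents have six cards, so you need the
# top six (A down to the 9); missing the 9 breaks the guarantee.
print(runs_without_loss(["A", "K", "Q", "J", "10", "8", "7"]))  # False
```

So each extra card held beyond seven shaves one card off the number of top winners required.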
Hmmm, thanks, but that research doesn’t seem to make any effort to distinguish people with diagnosable dementia conditions from those without, and does mention that the rates can be quite different for different people, so I can’t tell whether there’s anything about it which contradicts what I thought I remembered encountering in other research.
I’m curious about your claim that at 60-70 years old people start rapidly becoming stupider for reasons we don’t know. I thought that I recalled reading that while the various forms of dementia become immensely more common with age, those who are fortunate enough to avoid any of them experience relatively little cognitive decline. Unless you mean only to say that our present understanding of Alzheimer’s and the other less common dementia disorders is relatively limited, so you’re counting that as a reason we don’t know (it is certainly something we don’t know how to fix, so you win on that point).
It certainly becomes stranger when you drop a word. But either way, strangeness is rarely evidence of very much.
I suppose I am denying that they are just appearances.
The research indicates that most people’s response to any social science result is “that’s what I would have expected,” although that doesn’t actually seem to be true; you can get them to say they expected conflicting results. Have there really been no studies of when people say they think studies are surprising, comparing the results to what people actually predicted beforehand (I know Milgram informally surveyed what people expected before his study, but I don’t think he did any rigorous analysis of expectations)? Perhaps people are as inaccurate in reporting what they find surprising as they are in reporting what they expected. It would certainly be interesting to know!
Over the course of a month? The reasons you give for thinking these stocks might go up aren’t things that would reliably manifest in such a short time frame, and the market generally has been down recently. I don’t think what you’ve described here is evidence of much of anything. Probably you’re no good at active investing, because the evidence seems to suggest that nobody is (the winners are just the ones who get lucky), but the reason to think that is because of the general evidence for that, not because of your personal experience over the past month.
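If it helps to see just how noisy a single month is, here’s a toy Monte Carlo sketch (every parameter is invented for illustration): even with zero stock-picking skill, a small hand-picked portfolio scatters several percentage points around the market in one month, so a single month’s underperformance carries almost no information.

```python
# Toy Monte Carlo (all parameters invented): even with zero stock-picking
# skill, one month of a few-stock portfolio scatters so widely around the
# market that a single month's result tells you almost nothing.
import random
import statistics

random.seed(0)
MARKET_MEAN = 0.00   # assumed monthly market drift
STOCK_VOL = 0.10     # assumed monthly volatility of a single stock
N_STOCKS = 5         # a small hand-picked portfolio
TRIALS = 100_000

def one_month_portfolio_return():
    # Each stock's month: market drift plus its own idiosyncratic noise.
    return statistics.fmean(
        random.gauss(MARKET_MEAN, STOCK_VOL) for _ in range(N_STOCKS)
    )

results = [one_month_portfolio_return() for _ in range(TRIALS)]
print(f"std. dev. of one-month return: {statistics.stdev(results):.1%}")
# Roughly 0.10 / sqrt(5), i.e. about 4.5% per month: skill effects of a
# fraction of a percent per month are invisible inside noise this size.
```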
A lot of biological research is inherently slow, because you have to wait to observe effects on slow processes in living things. Probably the only way to get rapid research progress on immortality is with vastly superior computer models running on vastly superior computers, substituting for as much as possible of the slow work of observing what really goes on in humans. Though there would probably still be a lot of that slow observation in the course of testing the computer models for accuracy. Anyway, making more powerful computers, and making better computer models of biochemistry, are already areas that get huge amounts of research spending. It seems likely that still more spending would encounter diminishing returns, such that no amount of concerted effort would further speed things up very dramatically (certainly not to the level you’re asking for). Though you might get the impression around here that everyone who isn’t a rationalist is a death lover, in fact most people want to live longer, including very rich people, and so a lot of money gets spent on pursuing that goal; lack of progress has a lot more to do with it being hard than with lack of effort.
I was under the impression that the research into biases by people like Kahneman and Tversky generally found that eliminating them was incredibly hard, and that expertise, and even familiarity with the biases in question, generally didn’t help at all. So this is not a particularly surprising result; what would be more interesting is if they had found anything that actually does reduce the effect of the biases.
It is almost completely uncontroversial that meaning is not determined by the conscious intentions of individual speakers (the “Humpty Dumpty” theory is false). More sophisticated theories of meaning note that people want their words to mean the same as what other people mean by them (as otherwise they are useless for communication). So, bare minimum, knowing what a word means requires looking at a community of language users, not just one speaker. But there are more complications; people want to use their words to mean the same as what experts intend more than they want to use their words to mean the same as what the ignorant intend. Partly that may be just to make coordination easier, but probably an even bigger motive is that people want their words to pick out useful and important categories, and of course experts are more likely to have latched on to those. A relatively uncontroversial extension of this is that meaning needn’t precisely match the intentions of any current language speaker or group of language speakers; if the intentions of speakers would point to one category, but there’s a very similar, mostly overlapping, but much more useful and important category, the correct account of the meaning is probably that it refers to the more useful and important category, even if none of the speakers know enough to pick out that category. That’s why words for “fish” in languages whose origins predate any detailed biological knowledge of whales nonetheless probably shouldn’t be thought to have ever included whales in their reference.
So, people can use words without anybody knowing exactly what they mean. And figuring out what they mean can be a useful exercise, as it requires learning more about what you’re dealing with; it isn’t just a matter of making an arbitrary decision. All that being said, I admit to having some skepticism about some of the words my fellow philosophers use; I suspect in a number of cases there are no ideal, unambiguous meanings to be uncovered (indeed, there are probably cases where they don’t mean anything at all, as the Logical Positivists sometimes argued).
I thought it got off to a great start, dragged a bit in the middle (too many of those standard, extremely long anime battles), but had a decent ending.
Because those countries also have lower labor costs, so executives can report that they’re saving money on labor and their company’s stock will go up. More cynically, international operations require more management (to keep on top of shipping issues and to deal with the differing governmental circumstances of the countries where operations are going on), and the managers who make such decisions may approve of an outcome where more is spent on management and less on labor. Most of the research I’ve heard of suggests that it is not because such relocations are overall more profitable; that’s very rarely the case.
Indeed. A more plausible alternative strategy for Germany would have been to forget the plan to invade Belgium, fight defensively on the western front, and concentrate their efforts against Russia at the beginning. Britain didn’t enter the war until the violation of Belgian neutrality. Admittedly, over time French diplomats might have found some other way to get Britain into the war, but Britain was at least initially unenthusiastic about getting involved, so I think Miller is on the right track in thinking Germany’s best hope was to look for ways to keep Britain out indefinitely.
I remember Bas van Fraassen (probably quoting or paraphrasing someone else, but I remember van Fraassen’s version) saying that the requirements for finding truth were, in decreasing order of importance, luck, courage, and technique (and this surely applies to most endeavours, not just the search for truth). But although technique comes last, it’s the one you have the most control over, so it makes sense to focus your attention there, even though its effect is the smallest. Of course, he is, like me, a philosopher, so perhaps we just share your bias toward caring about rationality.