What does ELK stand for here?
This is probably the best argument I have seen yet for being concerned about what things like GPT are going to be able to do. Very eye-opening.
66.42512077294685%
This should not be reported this way. It should be reported as something like 66%. The other digits are not meaningful.
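To make the point concrete, a minimal Python illustration (the choice of zero or one decimal place is my assumption; how many digits are actually meaningful depends on the underlying sample size, which isn’t given here):

```python
# Illustrative only: the raw figure quoted above, rounded to a precision
# the underlying data plausibly supports.  Whether 0 or 1 decimal places
# is right depends on the sample size, which we don't know.
raw = 66.42512077294685

print(f"{raw:.0f}%")   # -> 66%
print(f"{raw:.1f}%")   # -> 66.4%
```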
I don’t know of any broader, larger trends. It is worth noting here that the Rabbis of the Talmud themselves thought that the prior texts (especially the Torah itself) were infallible, so it seems that part of what might be happening is that over time, more and more gets put into the very-holy-text category.
Also, it seems important to distinguish here between being unquestionably correct and being literal. In a variety of different religions this becomes an important distinction, and in practice literalism is often sacrificed past a certain point in order to preserve the correctness of a claim. Also note that in many religious traditions, the traditions which are most literal try to argue that what they are doing is not literalism but something more sophisticated. For example, among conservative Protestants it isn’t uncommon to claim that they are not reading texts literally but rather using the “historical-grammatical method.”
MWI doesn’t say anything about other constants; the other parts of our wavefunction should have the same constants. However, other multiverse hypotheses do suggest that physical constants could be different.
That seems like an accurate analysis.
I’m actually more concerned about an error in logic. If one estimates a probability of, say, k that climate change will cause an extinction event in a given year, then the probability of it occurring over any given string of n years is not the obvious 1 − (1 − k)^n, since part of what goes into estimating k is the chance that climate change can in fact cause such an incident at all, and that uncertainty is not resolved independently each year.
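A minimal sketch of that point, with entirely made-up numbers: the averaged per-year estimate is the same in both cases, but the multi-year probability is very different once the “can it happen at all?” uncertainty is separated out.

```python
# Entirely made-up numbers, purely to illustrate the logical point above.
n_years = 30

# Naive calculation: treat each year as an independent draw at the averaged rate.
k_hat = 0.01                                    # 1% per year
naive = 1 - (1 - k_hat) ** n_years              # ~0.26

# Mixture: a 2% chance the mechanism exists at all; 50% per year if it does.
# Note that 0.02 * 0.5 = 0.01, the same single-year estimate as above.
p_possible = 0.02
k_if_possible = 0.5
mixture = p_possible * (1 - (1 - k_if_possible) ** n_years)   # ~0.02

print(f"naive 30-year probability:   {naive:.3f}")
print(f"mixture 30-year probability: {mixture:.3f}")
```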
Mainstream discussion of existential risk is becoming more of a thing. A recent example is this article in The Atlantic. They do mention a variety of risks but focus on nuclear war and worst-case global warming.
When people arguing with VoiceOfRa got several downvotes in a row, the conclusion drawn was sockpuppets.
There was substantially more evidence that VoiceOfRa was downvoting in a retributive fashion, including database evidence.
Slashdot had karma years before Reddit and was not nearly as successful. Granted, it handled news articles rather than general forum discussion, but this suggests that karma is not the whole story.
Further possible evidence for a Great Filter: A recent paper suggests that as long as the probability of an intelligent species arising on a habitable planet is not tiny (at least about 10^-24), then with very high probability humans are not the only civilization to have ever been in the observable universe, and a similar result holds for the Milky Way with around 10^-10 as the relevant probability. Article about the paper is here and the paper is here.
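For intuition (my own back-of-the-envelope framing, not the paper’s actual calculation): if there are $N$ habitable-zone planets and each independently produces a civilization at some point with probability $p$, then

$$P(\text{at least one other civilization ever}) \;=\; 1 - (1-p)^{N} \;\approx\; 1 - e^{-pN},$$

which approaches 1 once $pN$ is large. So the question comes down to how the assumed probability per planet compares with $1/N$ for the observable universe or the Milky Way respectively.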
The most interesting unknown in the future is the time of creation of Strong AI. Our priors are insufficient to predict it because it is such a unique task.
I’m not sure this follows. The primary problems with predicting the rise of Strong AI apply to most other artificial existential risks also.
Research on expert judgement indicates experts are just as bad as nonexperts in some counterintuitive ways, like predicting the outcome of a thing.
Do you have a citation for this? My understanding was that in many fields experts perform better than nonexperts. The main thing that experts share in common with non-experts is overconfidence about their predictions.
If people want to lock in their predictions, they can do so on PredictionBook here.
I am not making claims about “any sense of order”, but going by what I read, European police have lost control of some chunks of their territory.
In this context that’s what’s relevant, since VoiceOfRa talked about “European countries that have given up enforcing any sense of order in large parts of their major cities.” If you aren’t talking about that, then how is it a relevant response?
Can you explain why you see the probability of a SETI attack as so high? If you are a civilization doing this, not only does it require extremely hostile motivations, it also involves a) making everyone aware of where you are (making you a potential target), b) being able to hide the hostile behavior in extremely subtle aspects of an AI that apparently looks non-hostile, and c) declaring your own deep hostility to anyone who notices it.
What probability do you assign to this happening? How many conjunctions are involved in this scenario?
Yes, that would work. I think I was reacting more to the phrasing and imagined something more cartoonish, in particular a picture where the air conditioner is essentially floating in space.
You seem to be operating under the impression that subjective Bayesians think that Bayesian statistical tools are always the best tools to use in different practical situations? That’s likely true of many subjective Bayesians, but I don’t think it’s true of most “Less Wrong Bayesians.”
I suspect that there’s a large amount of variation in what “Less Wrong Bayesians” believe. It also seems that at least some treat it more as an article of faith or tribal allegiance than anything else. See for example some of the discussion here.
What do you see as productive in asking this question?
I’m upvoting this because the community could use more content challenging commonly held views, and some people do need to treat Eliezer as more fallible than they do.
That said, I find most of your examples unpersuasive. With the exception of some aspects of p-zombies, where you do show that Eliezer has misinterpreted what people are saying when they make this sort of argument, most of your arguments are not at all compelling evidence that Eliezer is wrong, although they do point to his general overconfidence (which seems to be a serious problem).
For what it is worth, one of my very first comments [was objecting to Eliezer’s use of phlogiston as an example of a hypothesis that did not generate predictions](https://www.lesswrong.com/posts/RgkqLqkg8vLhsYpfh/fake-causality?commentId=4Jch5m8wNg8pHrAAF).