I recently read Robin Hanson observing that small groups holding counter-intuitive beliefs tend to base them on very specific arguments (some even invent their own terminology), while outsiders who reject those beliefs often don't bother to learn the terminology or review the specific arguments, because they reject the beliefs on purely general grounds.
I had noticed the same thing myself, but only from within such a group: it outraged and annoyed me that the arguments of people who reject FOOM, or the danger of AI/AGI in general, are usually so general that they have nothing to do with the topic at all.
On the other hand, Hanson gives conspiracy theories as an example, and this is true, and it is something I had not thought about: when I come across a conspiracy theory, I don't even try to spend time studying it; I immediately reject it on general grounds and move on.
This could well be called applying an outside view and a reference class. The point, however, is not that I use it as an argument in a debate (arguing with a conspiracy theorist is useless a priori, because believing in one requires not understanding the scientific method); I use it to avoid wasting time on the more precise but longer look from the inside.
If I did take the time to consider a conspiracy theory, I would not dismiss its arguments with "your idea belongs to the reference class of conspiracy theories, and your view from the inside is just your perception under cognitive biases." I would apply an inside view, engage with the specific arguments, and reject them on the basis of scientific evidence.
And I would use the outside view, in the form of the conspiracy-theory reference class, only as weak prior information.
So the problem is not that I have an impenetrable barrier of reference classes, but that this narrow group has managed to leave the reference class of ordinary conspiracy theorists. And, perhaps necessarily, it has also entered the reference class of people well acquainted with the scientific method, because without that I would decide they are just one more of the slightly rarer non-standard conspiracy-theory groups.
And (I'm not entirely sure it's worth covering in one post) Hanson also talked about cases of a sharp change in values upon reaching full power. Maybe this doesn't happen in the economy, but in politics it happens regularly: at first the politician is all kindness and does only good; then we finally give him all the power, make him an absolute dictator, and for some reason he suddenly stops behaving so positively, carries out repressions, and starts wars.
And then there is Hanson's sketch about the explosion of "betterness," which ends with the question "Really?" Personally, I answer: well... yes. Right. Except that the human brain runs about 20 million times slower than a computer processor, so the process would take about that much longer.
If for a computer I would be prepared to believe in a period anywhere from one month down to one millisecond, then for a human it would be from roughly 1.5 million years down to six hours, though the latter requires the ability to self-modify instantly, including having every new idea instantly update all of your previous conclusions.
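The timescale conversion above is just the computer-side bounds multiplied by the assumed slowdown factor. A minimal sketch of that arithmetic (the 20-million-fold slowdown is this post's assumption, not a measured constant):

```python
# Scale the hypothesized "betterness explosion" timescales from a
# computer to a human, using the post's assumed slowdown factor.

SLOWDOWN = 20_000_000  # assumption: human brain ~20M times slower than a CPU

SECONDS_PER_HOUR = 3600
SECONDS_PER_MONTH = 30 * 24 * SECONDS_PER_HOUR   # ~1 month
SECONDS_PER_YEAR = 365 * 24 * SECONDS_PER_HOUR   # ~1 year

# Upper bound: one month of computer time -> human time, in years
upper_years = SLOWDOWN * SECONDS_PER_MONTH / SECONDS_PER_YEAR

# Lower bound: one millisecond of computer time -> human time, in hours
lower_hours = SLOWDOWN * 0.001 / SECONDS_PER_HOUR

print(f"{upper_years / 1e6:.2f} million years")  # ~1.64 million years
print(f"{lower_hours:.1f} hours")                # ~5.6 hours
```

So "from 1.5 million years to 6 hours" is the same interval as "from 1 month to 1 millisecond," just stretched by the assumed factor.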
Plus, in the intermediate scenarios you would still have to account for sleep, food, willpower, the need to invent a cure for aging, and the elimination of other causes of death. In addition, you would have to hide your theory of superiority from other people, so that they don't decide to stop you before you seize power.
But in general, yes, I think this is possible for humans; there is just a whole pile of obstacles: lack of willpower, lack of any ability to self-modify, and the terrible evolutionary spaghetti code in the brain that stands in the way of such modification. Above all, though, the lifespan is too short: if people lived a million years, or better a billion, to have a margin, they could achieve all this despite every difficulty except death.
As it stands, individuals live too little and think too little, so the explosion of betterness is possible for them only collectively: it has been going on for some 300 years, spanning roughly 3 to 15 generations and involving 8 billion people at its peak.
More specifically, I would consider this "betterness explosion" a broader concept than the "intelligence explosion," in the sense that it is not specifically about intelligence but about optimization processes in general; yet at the same time it is narrower, because it seems to carry specific values, positive human ones, whereas a nuclear explosion could also be called an optimization explosion, just by no means one aligned with human values.