But this is much too large a project for me to undertake now.
Too bad. I was excited about this post: I thought it was a good sign that you took that path, and that it would be highly promising to pursue further.
Another worry is that putting so many made-up probabilities into a probability tree like this is not actually that helpful. I’m not sure whether that’s true, but it worries me.
I’m rather pessimistic about that. The tree branches into an enormous number of possibilities; one can sort the possibilities into categories, but has no way of finding the total probability within each category.
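To make that worry concrete, here is a minimal sketch (a toy tree with invented branch probabilities and an invented category, not a model of anything real): even a modest depth-30 binary tree has about 10^9 leaves, so exact summation over a category is out of reach, and Monte Carlo sampling gives essentially no information about low-probability categories.

```python
import random

# Toy outcome tree, purely illustrative: depth 30, so 2**30 (~10^9) leaves.
# The branch probabilities and the "category" predicate are both made up.
DEPTH = 30
TRIALS = 100_000

def sample_leaf(rng):
    """Walk one root-to-leaf path; True = the likely branch at each level."""
    return tuple(rng.random() < 0.9 for _ in range(DEPTH))

def in_rare_category(leaf):
    """A made-up category: every branch went the unlikely way (p = 0.1**30)."""
    return not any(leaf)

rng = random.Random(0)
hits = sum(in_rare_category(sample_leaf(rng)) for _ in range(TRIALS))
print(f"estimated P(category) = {hits / TRIALS}")  # prints 0.0: no information
```

Exact enumeration of the ~10^9 leaves is already impractical here and hopeless for realistic trees, while sampling simply returns zero for any category whose probability is far below 1/TRIALS.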
Furthermore, the categories themselves do not correspond to technological effort. There are FAIs that result from a regular AI effort via some rather simple insight by the scientist who came up with the AI, insights that may only be visible up close, when one is going over an AI and figuring out why the previous version repeatedly killed half of itself instead of self-improving, or other cases whose probability we can’t guess at without knowing how the AI is implemented and what sort of great filters it has to pass before it fooms. And there are uFAIs that result from the FAI effort; those are uFAIs of an entirely different kind, with their own entirely different probabilities that can’t be guessed at.
Intuitions here are often very wrong, for example the intuitions about what ‘random’ designs do. Firstly, our designs are not random; secondly, random code predominantly crashes, and the non-crashing space is utterly dominated by one or two of the simplest non-crashing behaviours. The same may be true of goal systems that pass the filters of not crashing and of recursively self-improving over an enormous range. The filters are specific to the AI architecture.
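A toy illustration of the ‘random code mostly crashes’ point (the machine, instruction set, and limits are all invented for this example; it is not a claim about any real AI architecture): run short random programs on a tiny stack machine and tally the behaviours that result.

```python
import random
from collections import Counter

OPS = ["push0", "push1", "pop", "add", "dup", "jmpz"]

def run(program, max_steps=100):
    """Execute a random program; return a coarse behaviour label."""
    stack, pc, steps = [], 0, 0
    while pc < len(program):
        if steps >= max_steps:
            return "loop"                    # ran too long: non-halting
        steps += 1
        op = program[pc]
        pc += 1
        if op == "push0":
            stack.append(0)
        elif op == "push1":
            stack.append(1)
        elif op == "pop":
            if not stack:
                return "crash"               # stack underflow
            stack.pop()
        elif op == "add":
            if len(stack) < 2:
                return "crash"
            stack.append(stack.pop() + stack.pop())
        elif op == "dup":
            if not stack:
                return "crash"
            stack.append(stack[-1])
        elif op == "jmpz":
            if not stack:
                return "crash"
            if stack.pop() == 0:
                pc = 0                       # crude backward jump
    return f"halt:{tuple(stack)}"

rng = random.Random(0)
tally = Counter(run([rng.choice(OPS) for _ in range(8)])
                for _ in range(100_000))
print(tally.most_common(5))  # crashes dominate; a few trivial halts follow
```

On this toy machine the tally is dominated by crashes, with the surviving mass concentrated in a handful of trivial halting behaviours; whether real goal systems behave analogously under their own filters is exactly the open question.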
The existence of such filters, in my opinion, entirely thwarts any generic intuitions and generic arguments. The unknown, highly complex filters are an enormous sea between the logic and the probability estimates in the land of inference. Illogic, sadly, does not need to cross that sea, and rapidly suggests numbers that are not in any way linked to the relevant issue-set.