Which is why I repeat my question: What is the least impossible thing I could do next, where anything up to that is permitted by your model, so it’s equivalent to affirming that you think I might be able to do it, and anything beyond that is prohibited by your model, so it’s time to notice your confusion?
So, if von Neumann came out with similar FAI claims but couldn’t present compelling arguments to his peers (if not to exact agreement, perhaps within an order of magnitude), I wouldn’t believe him. So showing that, e.g. your math problem-solving ability is greater than my point estimate wouldn’t be very relevant. Shocking achievements would lead me to upgrade my estimate of your potential contribution going forward (although most of the work in an FAI team would be done by others in any case), resolving uncertainty about ability, but that would not be enough as such; what would matter is the effect on my estimates of your predictive model.
I would make predictions on evaluations of MIRI workshop research outputs by a properly constructed jury of AI people. If the MIRI workshops were many times more productive than comparably or better credentialed AI people according to independent expert judges (blinded to the extent possible), I would say my model was badly wrong, but I don’t think you would predict a win on that.
To avoid “too much work to do/prep for” and “disagreement about far future consequences of mundane predicted intermediates” you could give me a list of things that you or MIRI plan to attempt over the next 1, 3, and 5 years and I could pick one (with some effort to make it more precise).
DAGGRE...etc
Yes, I have seen you writing about the 80k quiz on LW and 80k and elsewhere; it’s good (although, as you mention, test-taking skills went far on it). I predict that if we take an unbiased sample of people with similarly high cognitive test scores, extensive exposure to machine learning, and good career success (drawn from academia and tech/quant finance, say), and look at the top scorers on the 80k quiz and similar, their estimates for MIRI success will be quite a bit closer to mine than to yours. Do you disagree? Otherwise, I would want to see drastic outperformance relative to such a group on a higher-ceiling version (although this would be confounded by advance notice and the opportunity to study/prepare).
DAGGRE is going into the area of technology, not just geopolitics. Unfortunately it is mostly short-term stuff, not long-term basic science or subtle properties of future tech, so the generalization is imperfect. Also, would you predict exceptional success in predicting short- to medium-term technological developments?
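For concreteness, “exceptional success” on such questions would presumably be measured with a proper scoring rule like the Brier score (lower is better). A minimal sketch, with every probability and outcome invented purely for illustration:

```python
# Toy comparison of two forecasters on binary questions via Brier score.
# All probabilities and resolutions below are hypothetical.

def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(outcomes)

forecaster_a = [0.9, 0.2, 0.7, 0.6, 0.1]  # hypothetical stated probabilities
forecaster_b = [0.6, 0.4, 0.5, 0.5, 0.3]
outcomes     = [1, 0, 1, 0, 0]            # hypothetical question resolutions

print(brier_score(forecaster_a, outcomes))  # 0.102: well calibrated here
print(brier_score(forecaster_b, outcomes))  # 0.182: near the 0.25 of always saying 0.5
```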
So, if von Neumann came out with similar FAI claims...
...showing that, e.g. your math problem-solving ability is greater than my point estimate wouldn’t be very relevant.
The question is not what convinces you that I can do FAI within the framework of your antiheroic epistemology. The question is what first and earliest shows that your antiheroic epistemology is yielding bad predictions. Is this a terrible question to ask for some reason? You’ve substituted an alternate question a couple of times now.
Also, would you predict exceptional success in predicting short- to medium-term technological developments?
From my perspective, you just asked how bad other people are at predicting such developments. The answer is that I don’t know. Certainly many bloggers are terrible at it. I don’t suppose you can give a quick example of a DAGGRE question?
The question is not what convinces you that I can do FAI within the framework of your antiheroic epistemology.
The question is what first and earliest shows that your antiheroic epistemology is yielding bad predictions
Which I said in the very same paragraph.
Is this a terrible question to ask for some reason? You’ve substituted an alternate question a couple of times now.
I already gave the example of independent judges evaluating MIRI workshop output, among others. If we make the details precise, I can set the threshold on the measure. Or we can take any number of other metrics with approximately continuous outputs where I can draw a line. But it takes work to define a metric precise enough to be solid, and I don’t want to waste my time generating more and more examples or making them ultra-precise without feedback on what you will actually stake a claim on.
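To illustrate what drawing such a line could look like, here is a toy sketch; the rating scale, the scores, and the threshold are all hypothetical, and the real work would be in specifying the jury and the comparison group:

```python
# Hypothetical resolution procedure: blinded jurors rate research outputs
# on a 1-10 scale, and the claim resolves on whether the mean gap between
# groups clears a pre-registered threshold. Every number here is made up.

from statistics import mean

miri_scores    = [6.5, 7.0, 5.5, 6.0]  # hypothetical juror ratings of workshop outputs
control_scores = [5.0, 6.0, 5.5, 4.5]  # hypothetical ratings of a credentialed comparison group

THRESHOLD = 1.5  # pre-registered gap that would count as my model being badly wrong

gap = mean(miri_scores) - mean(control_scores)
print(f"gap = {gap:.2f}; model badly wrong: {gap > THRESHOLD}")
```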
I can’t determine what’s next without knowledge of what you’ll do or try.
I don’t suppose you can give a quick example of a DAGGRE question?

http://blog.daggre.org/tag/prediction-market/
To clear up the ambiguity, does this mean you agree that I can do anything short of what von Neumann did, or that you don’t think it’s possible to get as far as independent judges favorably evaluating MIRI output, or is there some other standard you have in mind? I’m trying to get something clearly falsifiable, but right now I can’t figure out the intended event due to sheer linguistic ambiguity.
I also think that evaluation by academics is a terrible test for things that don’t come with blatant overwhelming unmistakable undeniable-even-to-humans evidence—e.g. this standard would fail MWI, molecular nanotechnology, cryonics, and would have recently failed ‘high-carb diets are not necessarily good for you’. I don’t particularly expect this standard to be met before the end of the world, and it wouldn’t be necessary to meet it either.
To clear up the ambiguity, does this mean you agree that I can do anything short of what von Neumann did
As I said in my other comment, I would be quite surprised if your individual mathematical and AI contributions reach the levels of the best in their fields, as you are stronger verbally than mathematically; I discuss in more detail what I would and wouldn’t find surprising there.
I also think that evaluation by academics is a terrible test for things that don’t come with blatant overwhelming unmistakable undeniable-even-to-humans evidence—e.g. this standard would fail MWI, molecular nanotechnology, cryonics, and would have recently failed ‘high-carb diets are not necessarily good for you’.
I recently talked to Drexler about nanotechnology in Oxford. Nanotechnology:

- Is way behind Drexler’s schedule, and even accounting for there being far less funding and focused research than he expected, the timeline skeptics get significant vindication.
- Was said by the NAS panel to be possible, with no decisive physical or chemical arguments against (and discussion of some uncertainties which would not much change the overall picture, in any case); arguments against tend to be, or turn into, timeline skepticism and skepticism about the utility of research.
- Has not been the subject of a more detailed report or expert-judgment test than the National Academy of Sciences one (which said it’s possible) because Drexler was not on the ball and never tried. He is currently working with the FHI to get a panel of independent eminent physicists and chemists to work it over, and expects them to be convinced.