The question is not what convinces you that I can do FAI within the framework of your antiheroic epistemology. The question is what first and earliest shows that your antiheroic epistemology is yielding bad predictions.
Which I said in the very same paragraph.
Is this a terrible question to ask for some reason? You’ve substituted an alternate question a couple of times now.
I already gave the example of independent judges evaluating MIRI workshop output, among others. If we make the details precise, I can set the threshold on the measure. Or we can take any number of other metrics with approximately continuous outputs where I can draw a line. But it takes work to define a metric precise enough to be solid, and I don’t want to waste my time generating more and more additional examples or making them ultra-precise without feedback on what you will actually stake a claim on.
I can’t determine what’s next without knowledge of what you’ll do or try.
I don’t suppose you can give a quick example of a DAGGRE question?
http://blog.daggre.org/tag/prediction-market/
To clear up the ambiguity, does this mean you agree that I can do anything short of what von Neumann did, or that you don’t think it’s possible to get as far as independent judges favorably evaluating MIRI output, or is there some other standard you have in mind? I’m trying to get something clearly falsifiable, but right now I can’t figure out the intended event due to sheer linguistic ambiguity.
As I said in my other comment, I would be quite surprised if your individual mathematical and AI contributions reached the level of the best in their fields, since you are stronger verbally than mathematically; I discuss there in more detail what I would and would not find surprising.
I also think that evaluation by academics is a terrible test for things that don’t come with blatant overwhelming unmistakable undeniable-even-to-humans evidence—e.g. this standard would fail MWI, molecular nanotechnology, cryonics, and would have recently failed ‘high-carb diets are not necessarily good for you’. I don’t particularly expect this standard to be met before the end of the world, and it wouldn’t be necessary to meet it either.
I recently talked to Drexler about nanotechnology in Oxford. Nanotechnology:
1. Is way behind Drexler’s schedule; even accounting for there being far less funding and focused research than he expected, the timeline skeptics have been significantly vindicated.
2. Was said by the NAS panel to be possible, with no decisive physical or chemical arguments against it (and with discussion of some uncertainties that would not much change the overall picture in any case); arguments against it tend to be, or turn into, timeline skepticism and skepticism about the utility of the research.
3. Has not been the subject of a more detailed report or expert-judgment test than the National Academy of Sciences one (which said it’s possible), because Drexler was not on the ball and never tried. He is currently working with the FHI to get a panel of independent eminent physicists and chemists to work it over, and expects them to be convinced.