but what do you think the correct answer is? I think the implied answer is true.
I would say with fairly high confidence that he can assign 90% probability to that, and that his doing so is a fairly impressive effort at avoiding the typical human tendency toward overconfidence. I could readily be persuaded that the actual probability, given what you know, is less than 90%; even hearing you give implied quantitative bounds in this post shifted me in the direction of optimism. However, given what he is able to know (including his not-knowing of logical truths due to bounded computation), his predominantly outside-view estimate seems like an appropriate prediction.
It is actually only Luke’s recent declaration that access to some of your work increased his expectation that FAI success (and so the avoidance of GAI doom) is possible that allowed me to update enough that I no longer consider Holden to be erring slightly on the optimistic side (at least relative to what I know).
This sounds like you would tend to assign 90% irreducible doom probability from the best possible FAI effort. What do you think you know, and how do you think you know it?
This sounds like you would tend to assign 90% irreducible doom probability from the best possible FAI effort.
While incorrect, this isn’t an unreasonable assumption; most people who make claims similar to mine may well also hold that belief. However, what I have said is about what Holden believed given what he had access to and, to a lesser extent, what I believed prior to reading your post. I’ve mentioned that your post constitutes significant previously unheard information about your position. I update on that kind of evidence even without knowing the details. Holden can be expected to update too, but he should (probably) update less given what he knows, which relies heavily on knowledge of cause-based organisations and how the people within them think.
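To make the kind of update I mean concrete, here is a minimal sketch in Python of a standard Bayesian update by a likelihood ratio. Every number in it (the 90% prior and the likelihood ratio attached to "a serious, previously unpublished case exists") is a hypothetical placeholder for illustration only, not an estimate anyone in this thread has actually given.

```python
# Purely illustrative Bayesian update: hearing that significant new information
# exists can shift an estimate even before the details are known.
# Every number below is a hypothetical placeholder, not anyone's actual estimate.

def update(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability after an update by the given likelihood ratio
    (P(evidence | hypothesis) / P(evidence | not hypothesis))."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

prior_doom = 0.90              # hypothetical prior that the FAI effort fails
# Suppose a serious, previously unheard case is twice as likely to exist in
# worlds where the effort is less doomed than assumed:
likelihood_ratio_for_doom = 1.0 / 2.0

posterior_doom = update(prior_doom, likelihood_ratio_for_doom)
print(f"posterior P(doom) ~= {posterior_doom:.2f}")  # ~0.82 with these made-up numbers
```

The only point of the sketch is that the direction and rough size of such an update do not require knowing the details of the evidence, just a judgment about how surprising that evidence would be in each world.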
What do you think you know, and how do you think you know it?
A far-from-complete list of things that I knew and still know (a rough numerical sketch of how such considerations combine follows the list):
It is possible to predict human failure without knowing exactly how they will fail.
I don’t know what an O-ring is (I guess it is a circle with a hole in it). I don’t know the engineering details of any of the other parts of a spacecraft either. I would still assign a probability significantly greater than epsilon to any given flight failing catastrophically, despite knowing far less than what the smartest people in the field know. That kind of thing is hard.
GAI is hard.
FAI is harder.
Both of those tasks are probably harder than anything humans have ever done.
Humans have failed at just about everything significant they tried the first time.
Humans fail at stuff even when they try really, really hard.
Humans are nearly universally too optimistic when they are planning their activities.
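For concreteness, here is one way an outside-view estimate in the vicinity of 90% could be composed from considerations like those above. Every base rate in this sketch is a hypothetical placeholder I am inventing for illustration; the list argues only for the qualitative direction, not for these particular figures.

```python
# Illustrative outside-view composition. All base rates below are hypothetical
# placeholders, not estimates made anywhere in this thread.

p_gai_feasible_in_time   = 0.8  # hypothetical: GAI turns out to be buildable at all
p_fai_theory_correct     = 0.4  # hypothetical: the Friendliness theory is right the first time
p_implementation_correct = 0.5  # hypothetical: the implementation matches the theory
p_no_one_else_first      = 0.7  # hypothetical: no unFriendly GAI is built first

p_success = (p_gai_feasible_in_time
             * p_fai_theory_correct
             * p_implementation_correct
             * p_no_one_else_first)

print(f"P(success) ~= {p_success:.2f}, P(failure) ~= {1 - p_success:.2f}")
# ~0.11 success, ~0.89 failure with these made-up numbers: the point is only
# that several hard conjunctive steps push an outside-view estimate toward doom.
```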
Those are some of the things I know, and they illustrate in particular why I was shocked by this question:
I would ask him what he knows now, in advance, that all those sane intelligent people will miss.
Why on earth would you expect that Holden would know in advance what all those sane, intelligent people would miss? If Holden already knew that, he could just email them and they would fix it. Not knowing the point of failure is the problem.
I am still particularly interested in this question. It is a boolean question and shouldn’t be too difficult or status-costly to answer. If what I know and why I think I know it are important, it seems like knowing why I don’t know more could be too.
GAI is indeed hard and FAI is indeed substantially harder. (BECAUSE YOU HAVE TO USE DIFFERENT AGI COMPONENTS IN AN AI WHICH IS BEING BUILT TO COHERENT NARROW STANDARDS, NOT BECAUSE YOU SIT AROUND THINKING ABOUT CEV ALL DAY. Bolded because a lot of people seem to miss this point over and over!)
However, if you haven’t solved either of these problems, I must ask you how you know that it is harder than anything humans have ever done. It is indeed different from anything humans have ever done, and involves some new problems relative to anything humans have ever done. I can easily see how it would look more intimidating than anything you happened to think of comparing it to. But would you be scared that nine people in a basement might successfully, by dint of their insight, build a copy of the Space Shuttle? Clearly I stake quite a lot of probability mass on the problem involving less net labor than that, once you know what you’re doing. Again, though, the key insight is just that you don’t know how complex the solution will look in retrospect (as opposed to how intimidating the problem is to stare at unsolved) until after you’ve solved it. We know nine people can’t build a copy of a NASA-style Space Shuttle (at least not without nanotech) because we know how to build one.
Suppose somebody predicted with 90% probability that the first manned Space Shuttle launch would explode on the pad, even if Richard Feynman looked at it and signed off on the project, because it was big and new and different and they didn’t see how anything that big could get into orbit. Clearly they would have been wrong, and you would wonder how they got into that epistemic state in the first place. How is an FAI project disanalogous to this, if you’re pulling the 90% probability out of ignorance?
Thank you for explaining some of your reasoning.