This kind of seems like political slander to me. Maybe I’m miscalibrated? But it seems like you’re thinking of “reasonable estimates” as things produced by groups or factions, treating SI as a single “estimate” in this sense, and lumping them with a vaguely negative but unspecified reference class of “prophetic groups”.
The packaged claims function to reduce SI’s organizational credibility, yet they reference no external evidence and make no testable claims. Does your “prophetic groups” reference class include 1930s nuclear activists, 1950s environmentalists, or 1970s nanotechnology activists? Those examples come from the socio-political reference class I generally think of SI as belonging to, and I think of them in a mostly positive way.
Personally, I prefer to think of “estimates” as specific predictions produced by specific processes at specific times, and they seem like they should be classified as “reasonable” or not on the basis of their mechanisms and grounding in observables in the past and the future.
The politics and social dynamics surrounding an issue can give you hints about what’s worth thinking about, but ultimately you have to deal with the object-level issues, and the object-level issues will screen off the politics and social dynamics once you process them. The most reasonable publicly available tool I’m aware of for extracting a “coherent opinion” from someone on the subject of AGI is the Uncertain Future.
(Endgame: Singularity is a more interesting tool in some respects. It’s interesting for building intuitions about certain kinds of reality/observable correlations because it has you play as a weak but essentially benevolent AGI rather than as humanity, but (1) it is ridiculously over-specific as a prediction tool, and (2) it seems to give the AGI certain unrealistic advantages and disadvantages for the sake of making it more fun as a game. I’ve had a vague thought to fork it, try to make it more realistic, write a bot to play it, and use that as an engine for Monte Carlo simulation of singularity scenarios. Alas: a day job prevents me from having the time, and if that constraint were removed I bet I could find many higher-value things to work on, reality being what it is, and people being motivated to action the way they are.)
Do you know of anything more epistemically helpful than the Uncertain Future? If so, can you tell me about it? If not, could you work through it and say how it affected your model of the world?
(Note that the Uncertain Future software is mostly supposed to be a conceptual demonstration; as mentioned in the accompanying conference paper, a better probabilistic forecasting guide would take historical observations and uncertainty about constant underlying factors into account more directly, with Bayesian model structure. The most important part of this would be stochastic differential equation model components that could account for both parameter and state uncertainty in nonlinear models of future economic development from past observations, especially of technology performance curves and learning curves. Robin Hanson’s analysis of the random properties of technological growth modes has something of a similar spirit.)
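To make the distinction between parameter and state uncertainty concrete, here is a minimal sketch, not the Uncertain Future’s actual model: it forecasts a toy stochastic growth process by first drawing the uncertain drift parameter from an assumed prior, then simulating the noisy state trajectory, so the predictive distribution reflects both sources of uncertainty. All numbers (the prior, the volatility, the horizon) are made up for illustration.

```python
import random

random.seed(0)

def simulate_path(x0, mu, sigma, years, dt=0.1):
    """Euler-Maruyama simulation of dX = mu*X dt + sigma*X dW (state uncertainty)."""
    x = x0
    for _ in range(int(years / dt)):
        dw = random.gauss(0.0, dt ** 0.5)  # Brownian increment
        x += mu * x * dt + sigma * x * dw
    return x

def forecast(x0=1.0, years=20, n=2000):
    """Predictive samples: draw the drift from a prior (parameter
    uncertainty), then simulate a noisy path (state uncertainty)."""
    samples = []
    for _ in range(n):
        mu = random.gauss(0.05, 0.02)  # assumed prior over the growth rate
        samples.append(simulate_path(x0, mu, sigma=0.1, years=years))
    return samples

samples = sorted(forecast())
median = samples[len(samples) // 2]
print(f"median 20-year growth factor: {median:.2f}")
```

A fuller Bayesian treatment would condition the prior over `mu` on historical observations (e.g. technology performance curves) rather than asserting it, but the two-stage sampling structure would stay the same.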