Shane [Legg], unless you know that your plan leads to a good outcome, there is no point in getting there faster (and this applies to each step along the way). Outcompeting other risks only becomes relevant when you can provide a better outcome. If your plan says that you only launch an AGI when you know it’s an FAI, you can’t get there faster by omitting the FAI part. And if you do omit the FAI, you are just working for destruction, so there is no point in getting there faster.
The amendment to your argument might be that you can gain a crucial technical insight into FAI while working on AGI. I agree with that, but work on AGI should remain a strict subgoal: not in the sense of “I’ll fail at it anyway, but might learn something”, nor “I’ll genuinely try to build an AGI”, but as “I’ll try to think about the technical side of developing an AGI, in order to learn something”. Just as you study statistics, machine learning, information theory, computer science, cognitive science, evolutionary psychology, neuroscience, and so on, to develop an understanding of the problem of FAI, you might study your own FAI-care-free ideas about AGI. This is dangerous, but might prove useful. I don’t know how useful it is, but neither do I know how useful modern machine learning is for the same task, beyond the basics. Thinking about AGI seems closer to the target than most of machine learning, but we learn machine learning anyway. The catch is that currently there is no meaningful science of AGI.
If your plan says that you only launch an AGI when you know it’s an FAI, you can’t get there faster by omitting the FAI part. And if you do omit the FAI, you are just working for destruction, so there is no point in getting there faster.
That idea seems to be based on a “binary” model—win or lose.
It seems unlikely to me that the world will work like that. The quantity of modern information that is preserved into the far future is a continuous quantity. The probability of our descendants preserving humans instrumentally—through historical interest—also looks like a continuous quantity to me. It looks more as though there will be a range of possible outcomes—of varying desirability to existing humans.
That idea seems to be based on a “binary” model—win or lose.
Well, there are a thousand different ways to lose, but I label any future containing “six billion corpses” as a losing one.
And remember that in the space of all possible minds, the set that takes even marginal interest in us is vanishingly small compared to the set that would readily wipe us out.
Well, there are a thousand different ways to lose, but I label any future containing “six billion corpses” as a losing one.
You don’t think humanity would ever willingly go for destructive uploading?
I don’t think win/lose is too useful here. The idea that there are many ways to lose is like saying that most arrangements of a 747’s components don’t fly. True—but not too relevant when planning a flight.
You don’t think humanity would ever willingly go for destructive uploading?
Understand my meaning, do not cleave my words. I mean of course “six billion mind-state annihilations,” and I highly doubt you were unable to think of that interpretation.
I don’t think win/lose is too useful here. The idea that there are many ways to lose is like saying that most arrangements of a 747’s components don’t fly. True—but not too relevant when planning a flight.
But there are any number of failure points and combinations thereof that would be mission-fatal during flight.