How not hard is it? How long do you think it would take you to solve it?
I think it will be incidental to AGI. That is, by the time you are approaching human-level AGI it will be essentially obvious (to the sort of person who groks human-level AGI in the first place). Motivation (as a component of the process of thinking) is integral to AGI, not some extra thing only humans and animals happen to have. Motivation needs to be grokked before you will have AGI in the first place. Human motivational structure is quite complex, with far more ulterior motives (clan affiliation, reproduction, etc.) than straightforward ones. AGIs needn't be so burdened, which in many ways makes the FAI problem easier than our human-based intuition might suggest. On the other hand, simple random variation is a huge risk: no matter the intentional agenda, there is always the possibility that a simple error will push that very abstract coefficient of feedback over unity, and then you have a problem.

If AGI weren't going to happen regardless, I might say it's worth debating now what the nature of that problem would be (though even in that debate I still say it's not a huge problem; it's not instantaneous FOOM, it's time-to-unplug FOOM, and by then you have the advantage of other FAIs with full ability to analyze each other, so you actually have a lot of tools available to put out fires long before they're raging). But AGI is going to happen regardless, so the race is not FAI vs. AGI, but whether the first to solve AGI wants FAI or something else. And as I say, there is also the race against our own inevitable demise from old age (talk to anybody who's been in the longevity community for more than 20 years and you will learn they once had your optimism about progress).
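To make the "coefficient of feedback over unity" point concrete, here's a minimal sketch of a toy model (my own illustration, with a made-up gain parameter k, not anything specific to any actual AGI design): treat each self-improvement cycle as multiplying capability by k, and note how a small error that nudges k just past 1 turns a damped process into runaway growth.

```python
# Toy illustration of the "feedback over unity" remark above.
# Assumption: each self-improvement cycle multiplies capability by a gain k
# (a hypothetical one-parameter model used only for illustration).

def capability_after(cycles: int, k: float, c0: float = 1.0) -> float:
    """Capability after `cycles` rounds of self-improvement with gain k."""
    c = c0
    for _ in range(cycles):
        c *= k
    return c

# k slightly below 1: the process damps out; plenty of time to intervene.
print(capability_after(50, k=0.95))   # ~0.08
# k slightly above 1: the same small "error" compounds into runaway growth.
print(capability_after(50, k=1.05))   # ~11.5
```

The point of the sketch is only that the difference between the two regimes is a tiny parameter change, which is why a simple error, rather than any intentional agenda, is the risk being flagged.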
Don't get me wrong, FAI is not an uninteresting problem. My claim is quite simply that, for the goals of the FAI community (which I have to assume include your own long-term survival), y'all would do far better to be working (hard and seriously) on AGI than not. All of this sofa-think today will be replicated in short order by better-informed consideration down the road. And I ain't sayin' don't think about it today; I'm saying find a realistic balance between FAI and AGI research that doesn't leave you so far behind the game that your goals never get to matter, and I'm sayin' that balance is 99% AGI research and 1% FAI (for now). (And no, that doesn't mean 99 people doing AGI and 1 doing FAI. My point is that the 1 doing FAI is useless if they aren't 99% steeped in AGI from which to think about FAI in the first place.)