I disagree; reading Paul’s description made it clear to me how superficial it is to want to solve a problem by creating an army of uploads to do it for you. You may as well just try to solve the problem here and now, rather than hoping to outsource it to a bunch of nonexistent human-simulations running on nonexistent hardware. The only reason to consider such a baroque way of solving a problem is if you expect to be very pressed for time and yet to also have access to superdupercomputing power. You know, the world is hurtling towards singularity, no-one has crossed the finish line but many people are getting close, your FAI research organization manages to get a hold of a few petaflops on which to run a truncated AIXI problem-solver… and now you can finally go dig up that scrap of paper on which your team wrote down, years before, the perfectly optimal wish: “I want you, FAI-precursor, to do what the ethically stabilized members of our team would do, if they had hundreds of years to think about it, and if they...”, etcetera.
It’s a logically possible scenario, but is it remotely likely? This absolutely should not be the paradigm for a successful implementation of FAI or CEV. It’s just a wacky contingency that you might want to spend a little time thinking about. The plan should be that un-uploaded people will figure out what to do. They will surely make intensive use of computers, and there may be some big final calculation in which the schematics of human genetic, neural and cultural architecture are the inputs to a reflective optimization process; but you shouldn’t imagine that, like some bunch of Greg Egan characters, the researchers are going to successfully upload themselves and then figure out the logistics and the mathematics of a successful CEV process. It’s like deciding to fix global warming by building a city on the moon that will be devoted to the task of solving global warming.
The plan doesn’t require a truncated AIXI-like solver with lots of hardware. It’s a goal specification you can code directly into a self-improving AI that starts out with weak hardware. “Follow the utility function that program X would output if given enough time” doesn’t require the AI to run program X, only to reason about the likely outputs of program X.
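To make the distinction concrete, here is a minimal toy sketch (all names and numbers hypothetical, not anything proposed in this exchange): the agent's goal is pegged to the output of program X, but the agent only ever acts on a probability-weighted estimate of that output; the function standing in for X is never actually executed.

```python
# Toy sketch (hypothetical names throughout). The goal specification points at
# "whatever utility function program X would output if given enough time", but
# the agent reasons about X's likely output instead of running X.

def program_x():
    """Stand-in for the expensive deliberation process (e.g. the research team
    reflecting for centuries). Assumed far too costly to execute directly."""
    raise RuntimeError("too expensive to run on available hardware")

# Candidate utility functions X might output, each mapping an action to a score.
CANDIDATES = {
    "u_alpha": lambda action: {"a": 1.0, "b": 0.2, "c": 0.0}[action],
    "u_beta":  lambda action: {"a": 0.1, "b": 0.9, "c": 0.3}[action],
    "u_gamma": lambda action: {"a": 0.0, "b": 0.4, "c": 0.8}[action],
}

# The agent's (hypothetical) beliefs about which utility function X would
# output, obtained by reasoning about X rather than executing it.
BELIEFS = {"u_alpha": 0.5, "u_beta": 0.3, "u_gamma": 0.2}

def expected_utility(action, beliefs):
    """Score an action against the estimated output of X."""
    return sum(p * CANDIDATES[name](action) for name, p in beliefs.items())

def choose_action(actions, beliefs):
    """Act on the estimate; program_x() is never called."""
    return max(actions, key=lambda a: expected_utility(a, beliefs))

if __name__ == "__main__":
    print(choose_action(["a", "b", "c"], BELIEFS))  # -> "a" under these beliefs
```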
"Follow the utility function that program X would output if given enough time" doesn't require the AI to run program X, only to reason about the likely outputs of program X.
It doesn’t in principle require this, but might in practice, in which case the AI might eat the universe if that’s the amount of computational resources necessary to compute the results of running program X. That is a potential downside of this plan.
Well, on the dark, sardonic upside, it might find it convenient to eat the people in the process of using their minds to compute a CEV-function. Infinite varieties of infinite hell-eternities for everyone!
Could you express your objection more precisely than “it’s wacky”?