It sure seems like if he really grokked the philosophical and technical challenge of getting an AGI agent to be net beneficial, he would write a different paper. That first challenge sort of overshadows the task of dividing up the post-singularity pie.
But I’m not sure whether the overshadowing is merely a matter of size (in which case this paper is still doing useful work), or whether we should expect that solutions to the pie-dividing problems (e.g. weighing egalitarianism vs. utilitarianism) will necessarily fall out of whatever process lets the AI learn how to behave well.
If you buy a pizza cutter, but the pizza doesn’t arrive, then you’ve wasted your money.
(Technically this is incorrect if you ever buy a pizza again, or if there’s something else you can use the cutter to split, but as I understand it, the main reason people have expressed concern about AGI is the belief that if it goes horribly wrong there won’t be another chance to try again.)