This is a reason it would be extremely difficult. Yet I feel the remaining existential risk outweighs that difficulty.
It seems to me reasonably likely that our first version of FAI would go wrong. Human values are extremely difficult to understand because they're spaghetti mush: they often contradict each other and interact in bizarre ways. Reconciling that into something self-consistent and logical would be very difficult, and coding a program to do it would be harder still. We don't really seem to have made any real progress on FAI thus far, so I think this level of skepticism is warranted.
I'm proposing multiple alternative tracks to safer AI, which should probably be used in conjunction with the best FAI we can manage. Some of these tracks are expensive and difficult, but others seem simpler. The interactions between the different tracks produce a sort of safety net, where the successes of one check the failures of the others, as I've had to show throughout this conversation again and again.
I'm willing to spend much more than anyone else here, I think, to keep the planet safe against a much lower level of existential risk. That's the only explanation I can find for why everyone keeps responding with objections that essentially boil down to "this would be difficult and expensive." But AI as a whole is an expensive project, and so is FAI, yet those costs are accepted easily. I don't see why we shouldn't add another difficult project to our long list of difficult projects to tackle, given the stakes we're dealing with.
Most people on this site seem to consider AI a project to be completed in the next fifty or so years. I see it more as the most difficult task ever attempted by humankind. I think it will take at least 200 years, even factoring in new technologies I can't yet imagine being developed over that time. So the most common perspective on how we should approach AI seems flawed and rushed to me, compared to the stakes: millions of generations of human descendants. We're approaching a problem that affects millions of future generations and trying to fix it in half a generation, with as cheap a budget as we think we can justify. That seems like a really bad idea (possibly the worst idea ever) to me.