I’m not sure I like the capabilities vs. alignment frame. My view is: alignment will be achieved through the right set of capabilities. A big part of FAI is figuring out which capabilities are needed for FAI but not for UFAI. If you could answer that question, you’d have reduced FAI to an AI problem. And to answer it, you’re going to want to spend a lot of time thinking about AI capabilities. Nick Bostrom’s book is a heroic effort by someone who is not an AI capabilities expert to work on FAI, and it has a lot of interesting ideas. But as I learn more about AI, I start to see flaws in his thinking.
I’m not sure why OpenAI is working on deep reinforcement learning, though. Yes, it’s trendy. But expert systems were also trendy once. If our “safe AI” groups just work on whatever is trendy at the moment, how are they different from non-“safe AI” groups? I’m currently hoping deep reinforcement learning doesn’t go anywhere.
Maybe the meta question should be: what’s the best way to influence which research areas are trendy? I think Rahimi’s talk was a positive development. I’m told there are now a decent number of researchers working on the “simple theorems, simple experiments” approach he advocates.
It also wouldn’t surprise me if this turns out to be a faster way to make progress in the long run. After all, scientists reached transmutation before alchemists did. It’s not entirely clear to me whether Rahimi-style insight should be counted as blessed “alignment” research or cursed “capabilities” research, but I’m leaning towards optimism.
If people want to do things that are unambiguously helpful, here’s a different frame. Suppose we model the quality of an AI system as the product of the insight of the researchers, the amount of data they’ve got, and the amount of hardware they’ve got.
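In symbols, the toy model is just (my own notation, nothing rigorous):

$$Q = I \times D \times H,$$

where $Q$ is the quality of the system, $I$ is the researchers’ insight, $D$ is the amount of data, and $H$ is the amount of hardware.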
It’s not totally clear to me when more “insight” is a good thing and when it’s a bad thing. But it seems likely to me that restricting hardware (pushing down H), so that researchers are forced to rely on insight rather than brute-forcing things with black boxes, would be valuable. (Intuition: holding the quality of a system constant, we’d prefer that the quality come from researchers having deep insight into the problem they’re solving rather than from raw compute; in this model, if H shrinks while Q stays fixed, I and D have to make up the difference.) So maybe it’d be good to push for a global tax on GPUs or something. If a political movement forms around technological unemployment, it could agitate for this. Hardware is easier to regulate than software in any case.
Data is a bit more complicated: having a lot of training data for a task makes it easier to develop an AI that performs that task. That suggests increasing the amount of data available for training AIs on ethics-related tasks, and decreasing the amount of data available for training AIs on other tasks. I’m not totally sure what that would look like in practice.