Yes, by calling this site a “community of philosophers”, I roughly mean that, at the level of the entire community, there is no agreement that progress is being made. There is no mechanism for creating community-wide agreement that a problem has been solved.
You give three specific examples of progress above. From his recent writings, it is clear that Yudkowsky does not share your belief that contributions posted on this site in the last few years have made any meaningful progress towards solving alignment. You and I may agree that some or all of your three examples represent a form of progress, but you and I are not the entire community here; Yudkowsky is also part of it.
On the last of your three examples, I feel that ‘mesa optimizers’ is another regrettable case of the forces of linguistic entropy overwhelming any attempt at developing crisply stated definitions that are then accepted and leveraged by the entire community. It is not that the people posting on this site are incapable of using the tools needed to crisply define things; the problem is that many do not seem very interested in using other people’s definitions or models as a frame of reference. They would rather free-associate on the term, and then develop their own strongly held beliefs about what it is all supposed to mean.
I sense from your comments that you believe that, with more hard work and further progress on understanding alignment, it will in theory become possible to make this community agree, at some point in the future, that certain alignment problems have been solved. I, on the other hand, do not believe this community can ever reach that state of agreement, because the debating rules of philosophy apply here.
Philosophers are always allowed to disagree based on strongly held intuitive beliefs that they cannot be expected to explain any further. The type of agreement you seek is only possible in a sub-community willing to adopt stricter rules of debate.
This has implications for policy-related alignment work. If you want to make a policy proposal that has a chance of being accepted, it is generally required that you can point to some community of subject matter experts who agree on the coherence and effectiveness of your proposal. LW/AF cannot serve as such a community of experts.
> On the last of your three examples, I feel that ‘mesa optimizers’ is another regrettable case of the forces of linguistic entropy overwhelming any attempt at developing crisply stated definitions that are then accepted and leveraged by the entire community. It is not that the people posting on this site are incapable of using the tools needed to crisply define things; the problem is that many do not seem very interested in using other people’s definitions or models as a frame of reference. They would rather free-associate on the term, and then develop their own strongly held beliefs about what it is all supposed to mean.
Yes, clarity isn’t optional.
MIRI abandoned the idea of producing technology a long time ago, so what it will offer to the people who are working on AI technology is some kind of theory expressed in some kind of document, which will be of no use to them if they cannot understand it.
And it takes a constant parallel effort to keep the lines of communication open. It’s no use “woodshedding”, spending a lot of time developing your own ideas in your own language.