I think that understanding what we value is very important. I’m not convinced that developing a technical understanding of what we value is the most important thing right now.
I imagine that for some people, working on developing a technical understanding of what we value is the best thing that they could be doing. Different people have different strengths, and this leads to what’s best by utilitarian lights varying from person to person.
I don’t believe that the best thing for me to do is to study human values. I also don’t believe that at the margin, funding researchers who study human values is the best use of money.
Of course, my thinking on these matters is subject to change with incoming information. But if you’re claiming what I think you’re claiming, I’d need to see a more detailed argument than the one you’ve offered so far to be convinced.
If you’d like to correspond by email about these things, I’d be happy to say more about my thinking. Feel free to PM me with your email address.
I didn’t ask about perceived importance (which already takes feasibility into account); I asked about your belief that it’s not a productive enterprise (that is, the feasibility component of importance, considered alone), i.e., that we are not yet ready to work efficiently on the problem.
If you believe that we are not ready now, but that we must work on the problem eventually, you need a notion of what conditions would have to hold before you could conclude that working on the problem is productive.
And that’s my question: what are those conditions, and how can one figure them out without actually attempting to study the problem (say, via the proxy of a small team devoted to studying it professionally; I’m not yet arguing for a program on the scale of what’s expended on the study of string theory)?
I think that research of the type you describe is productive. Unless I’ve erred, my statements above concern the relative efficacy of funding such research, not suggestions that it has no value.
I personally still feel the way that I did in June, despite having read Fake Fake Utility Functions, etc. I don’t think it’s very likely that we will eventually have to do research of the type you describe to ensure an ideal outcome. Relatedly, I believe that at the margin, funding other projects currently has higher expected value than funding such research. But I may be wrong, and I don’t have an argument against your position; I think this is something that reasonable people can disagree on. I have no problem with you funding, engaging in, and advocating research of the type you describe.
You and I may have a difference that cannot be rationally resolved in a timely fashion, because the information that we each have access to is in forms that make it difficult or impossible to share. Having different people fund different projects according to their differing beliefs about the world serves as a rough real-world approximation to pooling everyone’s beliefs by Bayesian averaging and then funding projects according to the pooled result.
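To make that approximation concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the people, the projects, the credences, and the equal per-person budgets are illustrative assumptions, and “Bayesian averaging” is modeled as simple linear opinion pooling. Under those assumptions, everyone funding by their own beliefs yields the same allocation as funding by the pooled beliefs.

```python
# Hypothetical credences: each person's probability that each project
# is the best use of marginal funds. Names and numbers are illustrative.
beliefs = {
    "alice": {"values_research": 0.7, "outreach": 0.3},
    "bob":   {"values_research": 0.2, "outreach": 0.8},
}

budget_per_person = 100.0  # assume equal budgets, for simplicity

# (1) Decentralized: each person splits her own budget in proportion
# to her own credences.
decentralized = {}
for credences in beliefs.values():
    for project, p in credences.items():
        decentralized[project] = decentralized.get(project, 0.0) + p * budget_per_person

# (2) Centralized: pool credences by simple averaging (linear opinion
# pooling), then split the total budget in proportion to the pooled credences.
projects = {p for credences in beliefs.values() for p in credences}
pooled = {
    proj: sum(c.get(proj, 0.0) for c in beliefs.values()) / len(beliefs)
    for proj in projects
}
total_budget = budget_per_person * len(beliefs)
centralized = {proj: pooled[proj] * total_budget for proj in projects}

for proj in sorted(projects):
    print(proj, decentralized[proj], centralized[proj])
# outreach 110.0 110.0
# values_research 90.0 90.0
```

With proportional allocation and equal budgets the two schemes coincide exactly; real budgets are unequal and real donors don’t allocate proportionally to their credences, which is why I call decentralized funding only an approximation to the pooled ideal.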
So, anyway, I think you’ve given satisfactory answers to how you feel about questions (a) and (b) raised in my comment. I remain curious how you feel about point (c).
I did answer (c) before: any reasonable effort in that direction should start with trying to get SIAI itself to change or justify the way it behaves.
Yes, I agree with you. I didn’t remember that you had answered this question before. Incidentally, I did correspond with Michael Vassar. More on this to follow later.