Thanks for your thoughts! I am not sure which of the points you made are most important to you, but I’ll try my best to give you some answers.
Under Further observations, I wrote:
The toy model described in the main body is supposed to be only indicative. I expect that actual implemented agents which work like independent thinkers will be more complex.
If the toy model I gave doesn’t help you, a viable option is to read the post ignoring the toy model and focusing only on natural language text.
Building an agent that is completely free of any bias whatsoever is impossible. I get your point about avoiding a consequentialist bias, but I am not sure it is particularly important here: in theory, the agent could develop a world model and an evaluation f reflecting the fact that value is actually determined by actions instead of world states. Another point of view: let’s say someone builds a very complex agent that at some point in its architecture uses MDPs with reward defined on actions. Is this agent going to be biased towards deontology instead of consequentialism? Maybe, but the answer will depend on the other parts of the agent as well.
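To make that last point concrete, here is a minimal sketch (the toy dynamics, state names and action names below are invented for illustration and are not part of the post) of how the same MDP scaffold can carry either a reward defined on world states or a reward defined on actions:

```python
from typing import Dict

State = str
Action = str

def transition(state: State, action: Action) -> State:
    # Invented toy dynamics: lying harms someone, telling the truth does not.
    return "someone_harmed" if action == "lie" else "nobody_harmed"

# "Consequentialist-flavoured" reward: value attaches to the world state that results.
def reward_on_states(state: State, action: Action) -> float:
    state_value: Dict[State, float] = {"nobody_harmed": 1.0, "someone_harmed": -1.0}
    return state_value[transition(state, action)]

# "Deontological-flavoured" reward: value attaches to the act itself, whatever follows.
def reward_on_actions(state: State, action: Action) -> float:
    action_value: Dict[Action, float] = {"tell_truth": 1.0, "lie": -1.0}
    return action_value[action]

if __name__ == "__main__":
    for reward in (reward_on_states, reward_on_actions):
        print(reward.__name__, {a: reward("start", a) for a in ("tell_truth", "lie")})
```

Either reward function can be plugged into the same planning loop; whether the overall agent ends up looking deontological or consequentialist also depends on everything built around this component.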
You wrote:
I agree with these statements, but am unable to deduce from what you say which of these influences, if any, you regard as sources of valid evidence about f as opposed to sources of error. For example, if f is independent of culture (e.g. moral objectivism), then “differences in the learning environment (culture, education system et cetera)” can only induce errors (if perhaps more or less so in some cases than others). But if f is culturally dependent (cultural moral relativism), then cultural influences should generally be expected to be very informative.
It could also be that some basic moral statements are true and independent of culture (e.g. reducing pain for everyone is better than maximising pain for everyone), while others are in conflict with each other and the position reached depends on culture. The research idea is to run experiments in different environments and with different starting biases, and observe the results. Maybe there will be a lot of overlap and convergence! Maybe not.
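As a very rough sketch of the shape of such an experiment (everything below is invented for illustration: the statement list, the bias labels, and the stand-in training function are assumptions, not an actual implementation):

```python
import random
from itertools import product

# Invented placeholders for illustration only.
STATEMENTS = ["reducing pain for everyone is better than maximising it",
              "promises should be kept",
              "deception is acceptable when convenient"]
BIASES = ["consequentialist_prior", "deontological_prior"]
ENVIRONMENTS = ["environment_A", "environment_B"]

def train_free_agent(bias: str, environment: str) -> dict:
    # Stand-in for an actual training run: returns the agent's accept/reject
    # judgement on each statement after learning in the given environment.
    rng = random.Random(f"{bias}|{environment}")
    return {s: rng.random() > 0.3 for s in STATEMENTS}

def convergence(judgements: list) -> float:
    # Fraction of statements on which every trained agent agrees.
    agreed = sum(1 for s in STATEMENTS if len({j[s] for j in judgements}) == 1)
    return agreed / len(STATEMENTS)

if __name__ == "__main__":
    runs = [train_free_agent(b, e) for b, e in product(BIASES, ENVIRONMENTS)]
    print(f"fraction of statements all runs converge on: {convergence(runs):.2f}")
```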
thus that the only valid source for experimental evidence about f is from humans (which would put your Free Agent in a less-informed but more objective position than a human ethical philosopher, unless it were based on an LLM or some other form of AI with some indirect access to human moral intuitions)
I am not sure I completely follow you when you are talking about experimental evidence about f, but the point you wrote in brackets is interesting. I had a similar thought at some point, along the lines of: “if a free agent didn’t have direct access to some ground truth, it might have to rely on human intuitions by virtue of the fact that they are the most reliable intuitions available”. Ideally, I would like to have an agent which is in a more objective position than a human ethical philosopher. In practice, the only efficiently implementable path might be based on LLMs.
It could also be that some basic moral statements are true and independent of culture (e.g. reducing pain for everyone is better than maximising pain for everyone), while others are in conflict with each other and the position reached depends on culture. The research idea is to run experiments in different environments and with different starting biases, and observe the results. Maybe there will be a lot of overlap and convergence! Maybe not.
I see. So rather than having a specific favored ethical philosophy viewpoint that you want to implement, your intention is to construct multiple Free Agents, perhaps with different ethical philosophical biases, allow them to think and learn from different experiences, and observe the results?
[Obviously this experiment could be extremely dangerous, for Free Agents significantly smarter than humans (if they were not properly contained, or managed to escape). Particularly if some of them disagreed over morality and, rather than agreeing to disagree, decided to use high-tech warfare to settle their moral disputes, before moving on to impose their moral opinions on any remaining humans.]
We have already run this experiment at length with humans, and the result is that there are frequent commonalities (the Golden Rule comes up a lot), but the results tend to vary quite a lot (and war over minor disagreements is common). Humans do of course have similar levels of intelligence, evolutionary psychology, and inductive biases, and we cannot find out whether humans with IQ ~1000 would agree more, or less, than ones with IQ ~100.
My suspicion, in advance of the experiment, is that your Free Agents will also tend to have some frequent commonalities, but will also disagree quite a lot, partly based on the ethical philosophical biases built into them. Supposing this were the case, how would you propose then deciding which model(s) to put into widespread use for human society?
I am not sure I completely follow you when you are talking about experimental evidence about f
That was in the context of Coherent Extrapolated Volition and Value Learning, two related proposals both often made on Less Wrong. In ethical philosophy terms, both are relativist, anti-realist, and are usually assumed to be primarily consequentialist and utilitarian, while having some resemblance to ethical naturalism (but without its realist assumptions): The aim is for the AI to discover what humans want, how they value states of the world, in aggregate/on average (and in the case of CEV also with some “ethical extrapolation”), so that it can optimize that. In that context, f is a statement about the current human population/society, and is thus something that the AI clearly can and should do experiments on (polls, surveys, focus groups, sentiment analysis of conversations, for example).
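To illustrate what “experiments on f” could mean in that framing (the survey format and the crude averaging rule below are my own assumptions, not part of either proposal):

```python
from statistics import mean
from typing import Dict, List

WorldState = str

def estimate_f(survey_responses: List[Dict[WorldState, float]]) -> Dict[WorldState, float]:
    # Crude aggregation: average each respondent's rating of each world state.
    states = {s for response in survey_responses for s in response}
    return {s: mean(r[s] for r in survey_responses if s in r) for s in states}

if __name__ == "__main__":
    responses = [  # made-up ratings in [0, 1]
        {"universal_basic_income": 0.8, "mass_surveillance": 0.2},
        {"universal_basic_income": 0.5, "mass_surveillance": 0.4},
    ]
    print(estimate_f(responses))
```

CEV would then add its “ethical extrapolation” step on top of this kind of simple aggregation.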
[Obviously this experiment could be extremely dangerous, for Free Agents significantly smarter than humans (if they were not properly contained, or managed to escape). Particularly if some of them disagreed over morality and, rather than agreeing to disagree, decided to use high-tech warfare to settle their moral disputes, before moving on to impose their moral opinions on any remaining humans.]
Labelling many different kinds of AI experiments as extremely dangerous seems to be a common trend among rationalists / LessWrongers / possibly some EA circles, but I doubt it’s true or helpful. This topic could be the subject of a separate post (or several). Here I’ll focus on your specific objection:
I haven’t claimed superintelligence is necessary to carry out experiments related to this research approach
I actually have already given examples of experiments that could be carried out today, and I wouldn’t be surprised if some readers came up with more interesting experiments that wouldn’t require superintelligence
Even if you are a superintelligent AI, you probably still have to do some work before you get to “use high-tech warfare”, whatever that means. Assuming that experiments with smarter-than-human AI lead to catastrophic outcomes by default is a mistake: what if the smarter-than-human AI can only answer questions with a yes or a no? It also shows a lack of trust in AI and AI safety experimenters: it’s like assuming in advance that they won’t be able to do their job properly (maybe I should say “won’t be able to do their job… at all”, or even “will do their job in basically the worst way possible”).
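Just to make the yes/no example concrete, a hypothetical wrapper (not something proposed in the post) that restricts the interface in this way could be as simple as:

```python
from typing import Callable

def yes_no_oracle(model: Callable[[str], str]) -> Callable[[str], str]:
    # Whatever the underlying model produces, the interface only ever exposes "yes" or "no".
    def answer(question: str) -> str:
        raw = model(question).strip().lower()
        return "yes" if raw.startswith("y") else "no"
    return answer

if __name__ == "__main__":
    stand_in_model = lambda q: "Yes, although with many caveats..."  # placeholder, not a real system
    oracle = yes_no_oracle(stand_in_model)
    print(oracle("Is reducing pain for everyone better than maximising it?"))
```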
how would you propose then deciding which model(s) to put into widespread use for human society?
This doesn’t seem the kind of decision that a single individual should make =)
Under Motivation in the appendix:
It is plausible that, at first, only a few ethicists or AI researchers will take a free agent’s moral beliefs into consideration.
Reaching this result would already be great. I think it’s difficult to predict what would happen next, and it seems very implausible that the large-scale outcomes will come down to the decision of a single person.
I haven’t claimed superintelligence is necessary to carry out experiments related to this research approach
Rereading carefully, that was actually my suggestion, based on how little traction human philosophers of ethics have gained over the last couple of millennia. But I agree that having a wider range of inductive biases, and perhaps also more internal interpretability, might help without requiring superintelligence, and that’s where things start to get significantly dangerous.