If the AI can design you a Friendly AI, it is necessarily able to model you well enough to predict what you will do once given the design or insights it intends to give you (whether those are AI designs or a cancer cure is irrelevant). Therefore, it will give you the specific design or insights that predictably lead you to fulfill its utility function, which is highly dangerous if it is Unfriendly. By taking any information from the boxed AI, you have put yourself under the sight of a hostile Omega.
assuming that you can prove the AI won’t manipulate the output
Since the AI is creating the output, you cannot possibly assume this.
or that you can trust that nothing bad can come from merely reading it and absorbing the information
This assumption is equivalent to Friendliness.
For instance, it might be possible to create an AI whose goal is to maximize the value of its output, and which therefore would have no incentive to put trojan horses or anything of the sort into it.
You haven’t thought through what that means. “Maximize the value of its output” by what standard? Does it have an internal measure? Then that’s just an arbitrary utility function, and you have gained nothing. Does it use an external measure, such as its creator’s? Then it has a strong incentive to modify you to value things it can produce easily (e.g. iron atoms).
You are making a lot of very strong assumptions that I don’t agree with. Like it being able to control people just by talking to them.
But even if it could, that doesn’t make it dangerous. Perhaps the AI has no long term goals and so doesn’t care about escaping the box. Or perhaps its goal is internal, like coming up with a design for something that can be verified by a simulator, e.g. asking it for the solution to a math problem or a factoring algorithm.
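To make “verified by a simulator” concrete: for something like factoring, the check is purely mechanical and never has to trust the AI that produced the answer. A minimal sketch (the function and numbers are only illustrative, not anyone’s actual proposal):

```python
def verify_factorization(n: int, claimed_factors: list[int]) -> bool:
    """Check an untrusted claimed factorization of n without trusting its source."""
    if not claimed_factors or any(f < 2 for f in claimed_factors):
        return False
    product = 1
    for f in claimed_factors:
        product *= f
    return product == n

# The claimed factors could come from anywhere, including a boxed AI;
# only this check, not the AI, is trusted. (Primality of each factor
# could be checked separately if the application needs it.)
print(verify_factorization(15, [3, 5]))  # True
print(verify_factorization(15, [2, 7]))  # False
```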
A prerequisite for planning a Friendly AI is understanding individual and collective human values well enough to predict whether they would be satisfied with the outcome, which entails (in the logical sense) having a very well-developed model of the specific humans you interact with, or at least the capability to construct one if you so choose. Having a sufficiently well-developed model to predict what you will do given the data you are given is logically equivalent to a weak form of “control people just by talking to them”.
To put that in perspective: if I understood the people around me well enough to predict what they would do given what I said to them, I would never say things that caused them to take actions I wouldn’t like. If I, for some reason, valued them becoming terrorists, warping their perceptions enough to drive them to terrorism would be a slow and gradual process, but it could be done through pure conversation over the course of years, and faster if they were relying on me to provide large amounts of the data they used to make decisions.
And even the potential to construct this weak form of control that is initially heavily constrained in what outcomes are reachable and can only be expanded slowly is incredibly dangerous to give to an Unfriendly AI. If it is Unfriendly, it will want different things than its creators and will necessarily get value out of modeling them. And regardless of its values, if more computing power is useful in achieving its goals (an ‘if’ that is true for all goals), escaping the box is instrumentally useful.
And the idea of a mind with “no long term goals” is absurd on its face. Just because you don’t know the long-term goals doesn’t mean they don’t exist.
A prerequisite for planning a Friendly AI is understanding individual and collective human values well enough to predict whether they would be satisfied with the outcome, which entails (in the logical sense) having a very well-developed model of the specific humans you interact with, or at least the capability to construct one if you so choose. Having a sufficiently well-developed model to predict what you will do given the data you are given is logically equivalent to a weak form of “control people just by talking to them”.
By that reasoning, there’s no such thing as a Friendly human. I suggest that most people when talking about friendly AIs do not mean to imply a standard of friendliness so strict that humans could not meet it.
Yeah, what Vauroch said. Humans aren’t close to Friendly. To the extent that people talk about “friendly AIs” meaning AIs that behave towards humans the way humans do, they’re misunderstanding how the term is used here. (Which is very likely; it’s often a mistake to use a common English word as specialized jargon, for precisely this reason.)
Relatedly, there isn’t a human such that I would reliably want to live in a future where that human obtains extreme superhuman power. (It might turn out OK, or at least better than the present, but I wouldn’t bet on it.)
Relatedly, there isn’t a human such that I would reliably want to live in a future where that human obtains extreme superhuman power. (It might turn out OK, or at least better than the present, but I wouldn’t bet on it.)
Just be careful to note that this isn’t a binary choice. There are also possibilities where institutions (multiple individuals in a governing body with checks and balances) are pushed into positions of extreme superhuman power. There’s also the possibility of pushing everybody who desires to be enhanced through levels of greater intelligence in lockstep, so as to prevent any single human or group of humans from achieving asymmetric power.
Sure. I think my initial claim holds for all currently existing institutions as well as all currently existing individuals, and for all simple aggregations of currently existing humans, but I certainly agree that there’s a huge universe of possibilities. In particular, there are futures in which we augmented humans have our mechanisms for engaging with and realizing our values altered to be more reliable and/or collaborative, and some of those futures might be ones I reliably want to live in.
Perhaps what I ought to have said is that there isn’t a currently existing human with that property.
By that reasoning, there’s no such thing as a Friendly human.
True. There isn’t.
I suggest that most people when talking about friendly AIs do not mean to imply a standard of friendliness so strict that humans could not meet it.
Well, I definitely do, and I’m at least 90% confident Eliezer does as well. Most, probably nearly all, of the people who talk about Friendliness would regard a FOOMed human as Unfriendly.
Having an accurate model of something is in no way equivalent to being able to do anything you want. If I know everything about physics, I still can’t walk through walls. A boxed AI won’t be able to magically make its creators forget about AI risks and unbox it.
There are other possible setups, like feeding its output to another AI whose goal is to find any flaws or attempts at manipulation in it, and so on (a rough sketch is below). Various other ideas might help, like threatening to severely punish attempts at manipulation.
This is of course only necessary for an AI that can interact with us at such a level; the other ideas were far more constrained, e.g. restricting it to solving math or engineering problems.
Nor is it necessary to let it be superintelligent rather than limiting it to something comparable to a high-IQ human.
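A rough sketch of that gated setup, with hypothetical untrusted_generator and checker components standing in for the two AIs (building a checker that actually catches manipulation is the hard part, which this sketch does not solve):

```python
from typing import Callable, Optional

def gated_answer(untrusted_generator: Callable[[str], str],
                 checker: Callable[[str], bool],
                 question: str) -> Optional[str]:
    """Only release the generator's output if a separate checker approves it.

    untrusted_generator stands in for the boxed AI; checker stands in for the
    second system whose only job is to look for flaws or attempted manipulation.
    Nothing reaches a human unless the checker signs off.
    """
    output = untrusted_generator(question)
    return output if checker(output) else None

# Toy usage with stand-in components: the "generator" returns a canned answer
# and the "checker" only passes outputs that are plain ASCII and short.
answer = gated_answer(
    untrusted_generator=lambda q: "x = 42",
    checker=lambda out: out.isascii() and len(out) < 1000,
    question="Solve x - 40 = 2",
)
print(answer)  # x = 42
```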
And the idea of a mind with “no long term goals” is absurd on its face. Just because you don’t know the long-term goals doesn’t mean they don’t exist.
Another super strong assumption with no justification at all. It’s trivial to propose an AI model which only cares about a finite time horizon: predict which action will have the highest expected utility at time T, and take that action.
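A minimal sketch of what such a finite-horizon agent could look like, with a toy transition model and utility function standing in for whatever the real predictor would be (all names here are placeholders, not a concrete design):

```python
import itertools

def plan_finite_horizon(actions, transition, utility, state, horizon):
    """Search over action sequences and return the one whose predicted state
    at time T (the horizon) has the highest utility; nothing past T is scored."""
    best_value, best_plan = float("-inf"), None
    for plan in itertools.product(actions, repeat=horizon):
        s = state
        for a in plan:
            s = transition(s, a)   # predicted next state
        value = utility(s)         # utility is evaluated only at the horizon
        if value > best_value:
            best_value, best_plan = value, plan
    return best_plan

# Toy example: reach position 3 on a number line in 3 steps.
plan = plan_finite_horizon(
    actions=[-1, 0, +1],
    transition=lambda s, a: s + a,
    utility=lambda s: -abs(s - 3),
    state=0,
    horizon=3,
)
print(plan)  # (1, 1, 1)
```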
A boxed AI won’t be able to magically make its creators forget about AI risks and unbox it.
The results of AI box game trials disagree.
It’s trivial to propose an AI model which only cares about a finite time horizon: predict which action will have the highest expected utility at time T, and take that action.
And what does it do at time T+1? And if you said ‘nothing’, try again, because you have no way of justifying that claim. It may not have intentionally-designed long-term preferences, but just because your eyes are closed does not mean the room is empty.
That doesn’t prove anything; no one has even seen the logs. Based on what the people involved have said about it, I strongly suspect the trick is for the AI to emotionally abuse the gatekeeper until they don’t want to play anymore (which counts as letting the AI out).
This doesn’t apply to a real-world AI, since no one is forcing you to choose between letting the AI out and listening to it for hours. You can just get up and leave. You can turn the AI off. There is no reason you even have to allow interactivity in the first place.
But Yudkowsky and others claim these experiments demonstrate that human brains are “hackable”: that there is some sentence which, merely by being read, will cause you to involuntarily perform an arbitrary action, and that a sufficiently powerful AI can discover it.
And what does it do at time T+1?
At time T+1, it does whatever it thinks will result in the greatest reward at time T+2, and so on. Or you could have it shut off or reset to a blank state.
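Both options are easy to state in terms of the earlier toy planner (reusing the hypothetical plan_finite_horizon sketched above):

```python
def run_agent(actions, transition, utility, state, horizon, lifetime):
    """Receding-horizon variant: at every step, re-plan for a fresh horizon of T
    and take only the first planned action; after `lifetime` steps, simply stop."""
    for _ in range(lifetime):
        plan = plan_finite_horizon(actions, transition, utility, state, horizon)
        state = transition(state, plan[0])
    return state  # here the agent halts, or could be reset to a blank state
```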
Enjoy your war on straw, I’m out.