NancyL: Is there a body of knowledge about controlling self-modifying programs which could be used as a stepping stone to explaining what would be involved in FAI?
TimT: Self-modifying programs seem like a bit of a red herring. Most likely, groups of synthetic agents will become capable of improving the design of machine minds before individual machines can do that. So, you would then have a self-improving ecosystem of synthetic intelligent agents. This probably helps with the wirehead problem …
I agree that a multi-agent systems perspective is the most fruitful way of looking at the problem. And I agree that coalitions are far less susceptible to the pathologies that can arise with monomaniacal goal systems. A coalition of agents is rational in a different, softer way than is a single unified agent. For example, it might split its charitable contributions among charities. Does that weaker kind of rationality mean that coalitions should be denigrated? I think not.
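To make the "softer rationality" contrast concrete, here is a minimal sketch in Python; the utilities, charity labels, and member preferences are invented for illustration, not taken from the discussion. A single unified maximizer concentrates its whole budget on the option with the highest marginal value, while a coalition whose members each direct a share of the budget ends up splitting its contributions.

```python
# Hypothetical utility per dollar for each charity (illustrative only).
CHARITIES = {"A": 1.0, "B": 0.9, "C": 0.5}

def unified_allocation(budget):
    # A single agent with one utility function puts everything on the
    # charity with the highest marginal value.
    best = max(CHARITIES, key=CHARITIES.get)
    return {best: budget}

def coalition_allocation(budget, member_favorites):
    # Each coalition member directs its share to its own favorite,
    # so the aggregate donation ends up split across charities.
    share = budget / len(member_favorites)
    allocation = {}
    for favorite in member_favorites:
        allocation[favorite] = allocation.get(favorite, 0.0) + share
    return allocation

if __name__ == "__main__":
    print(unified_allocation(1_000_000))
    # {'A': 1000000}
    print(coalition_allocation(1_000_000, ["A", "A", "B", "C"]))
    # {'A': 500000.0, 'B': 250000.0, 'C': 250000.0}
```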
To answer Nancy’s question, there is a huge and growing body of knowledge about controlling multi-agent systems. Unfortunately, so far as I know, little of it deals with the scenario in which the agents are busily constructing more agents.
That does happen quite a bit in genetic and memetic algorithms—and artificial life systems.
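The agents-constructing-agents loop that genetic algorithms exhibit is easy to show in miniature. The sketch below uses a toy bit-counting fitness function and invented parameters, so it is illustrative only: each generation, the surviving agents build the next batch of agents by crossover and mutation.

```python
import random

GENOME_LEN = 16

def fitness(genome):
    # Toy objective: count of 1-bits in the genome.
    return sum(genome)

def make_child(parent_a, parent_b, mutation_rate=0.05):
    # New agents are built from existing agents: one-point crossover
    # followed by per-bit mutation.
    cut = random.randrange(1, GENOME_LEN)
    child = parent_a[:cut] + parent_b[cut:]
    return [bit ^ 1 if random.random() < mutation_rate else bit
            for bit in child]

def evolve(pop_size=30, generations=50):
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]
        # The surviving agents construct the next generation of agents.
        population = parents + [make_child(random.choice(parents),
                                           random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(fitness(best), best)
```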
I checked with the Gates Foundation. 7549 grants and counting!
It seems as though relatively united agents can split their charitable contributions too.
A note, though… if I had a billion dollars and decided just to give it to whichever charities GiveWell recommended as their top-rated international charities, then, because most charities have difficulty converting significant extra funds into the same level of effect, I would end up giving 1 + 10 + 50 + 1 + 0.3 + 5 = 67.3 million to 6 different charities and then be confused about what to do with my remaining 932.7 million dollars.
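Spelled out as code, using the room-for-more-funding figures from the comment (in millions of dollars); the charity labels are placeholders, not GiveWell's actual list:

```python
# Room for more funding per recommended charity, in millions of dollars,
# as stated in the comment above. Labels are hypothetical.
room_for_funding = {
    "charity_1": 1.0,
    "charity_2": 10.0,
    "charity_3": 50.0,
    "charity_4": 1.0,
    "charity_5": 0.3,
    "charity_6": 5.0,
}

budget = 1000.0  # one billion dollars, in millions

granted = sum(room_for_funding.values())
leftover = budget - granted

print(f"granted:  {granted:.1f} million across {len(room_for_funding)} charities")
print(f"leftover: {leftover:.1f} million")
# granted:  67.3 million across 6 charities
# leftover: 932.7 million
```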
I know the Gates Foundation does look like a coalition of agents rather than a single agent, but it doesn't look like a coalition of 7549+ agents. I'd guess at most about a dozen, and probably fewer, large components.
Their fact sheet says 24 billion dollars.