Notice that adding IAIs to the FFAIs does nothing more (according to many ways of resolving disagreements) than reducing the share of resources humanity gets.
But counting on a parliament of FFAIs to be finely balanced to get FAI out of it, without solving FAI along the way… seems a tad optimistic. You’re thinking of “this FFAI values human safety, this one values human freedom, they will compromise on safety AND freedom”. I’m thinking they will compromise on some lobotomy-bunker version of safety while running some tiny part of the brains to make certain repeated choices that technically count as “freedom” according to the freedom-FFAI’s utility.
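To make that worry concrete, here is a toy sketch (every world-state, score, and number below is invented purely for illustration; nothing is claimed about how real FFAIs would score anything). A parliament that naively sums two proxy utilities happily selects the outcome that nearly maxes both proxies, even though humans would find it horrifying:

    # Toy parliament of two FFAIs choosing among candidate world-states.
    # Every state and score is made up purely for illustration.

    # state -> (safety_proxy, freedom_proxy, human_verdict)
    states = {
        "status quo":             (0.30, 0.60, "mediocre"),
        "careful stewardship":    (0.85, 0.80, "good"),
        "bunker + token choices": (1.00, 0.95, "horrifying"),
    }

    def parliament_pick(states):
        """Naive compromise: maximize the sum of the two proxy utilities."""
        return max(states, key=lambda s: states[s][0] + states[s][1])

    winner = parliament_pick(states)
    print(winner, "->", states[winner][2])
    # bunker + token choices -> horrifying

The point is only that summing proxies rewards whatever jointly Goodharts them, not whatever humans would actually endorse.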
I’m just brainstorming in the same vein as these posts, of course, so consider the epistemic status of these comments extremely uncertain. But in the limit, if you have a large number of AIs (thousands, millions, or billions), each optimizing for some aspect that humans care about, maybe the outcome wouldn’t be terrible, though perhaps not as good as one truly friendly AI. The continuity-of-experience AI could compromise with the safety AI, the freedom AI, the “I’m a whole brain experiencing things” AI, and the “no tricksies” AI to make something not terrible.
Of course, people don’t care about all these aspects with equal weight, so if every aspect-AI got an equal say, the most likely failure mode might be that something people care about only a tiny amount (e.g. not stepping on cracks in the sidewalk) gets the same weight as something people care about a lot (e.g. experiencing genuine love for another human), and everything ends up pretty crappy. On the other hand, maybe many of these things can be satisfied simultaneously, so you end up living in a world with no sidewalk cracks where you are immediately matched with plausible loves of your life; it may not be optimal, but it may still be better than what we’ve got going on now.
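In the same toy spirit (aspects, weights, and satisfaction scores all invented), here is a sketch of that weighting problem: give every aspect-AI an equal vote and the parliament can prefer uniform mediocrity over an outcome that nails the few things humans actually weight heavily.

    # Toy illustration of the equal-weights failure mode. All aspects,
    # weights, and satisfaction scores (in [0, 1]) are invented.

    human_weights = {
        "genuine love":       0.60,
        "continuity":         0.30,
        "safety":             0.09,
        "no sidewalk cracks": 0.01,
    }
    equal_weights = {a: 1 / len(human_weights) for a in human_weights}

    # Outcome A nails what humans weight heavily; outcome B spreads
    # effort evenly and is uniformly mediocre.
    outcome_a = {"genuine love": 0.9, "continuity": 0.9,
                 "safety": 0.5, "no sidewalk cracks": 0.0}
    outcome_b = {a: 0.6 for a in human_weights}

    def value(outcome, weights):
        return sum(weights[a] * outcome[a] for a in weights)

    for name, out in [("A", outcome_a), ("B", outcome_b)]:
        print(name, round(value(out, equal_weights), 3),
              round(value(out, human_weights), 3))
    # A 0.575 0.855  -> humans strongly prefer A...
    # B 0.6   0.6    -> ...but the equal-weight parliament picks B:
    # the 1%-weight aspect pulls exactly as hard as the 60%-weight one.

Whether that dynamic washes out with billions of aspect-AIs rather than four is exactly the open question here.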
I’ll think about it. I don’t think it will work, but there might be an insight there we can use.
The current system overrides difference. We elect a small group of humans to spend the taxes earned by a large group of humans. Your concern is that AIs would override difference. But, where’s your concern for our current system? Why is it ok for humans to override difference but not ok for AIs to override difference? Either you have a double standard… or you don’t realize that you support a system that overrides difference.
That doesn’t look to me at all like an accurate description of Stuart_Armstrong’s concern.
Please try to understand that not every discussion has to be about your obsession with taxes.
I decree that, from this day forward, every discussion has to be about my obsession with taxes. Not really. In case you didn’t get the memo… nobody here is forced to reply to my comments. That I know of. If you were forced to reply to my comments… then please let me know who overrode your difference. I will surely give them a stern and strongly worded lecture on the value of difference.
Of course SA’s concern is that AIs would override difference. Overriding difference means less freedom. If SA weren’t concerned with AIs turning us humans into puppets… then he wouldn’t be obsessed with AI safety.
My question is… if he’s concerned with having our difference overridden… then why isn’t he concerned with our current system? It’s a perfectly legitimate and relevant question. Why is he ignoring the clear and present danger and focusing instead on an unclear and future danger?
Of course SA’s concern is that AIs would override difference. Overriding difference means less freedom. [...]

I question the accuracy of your mental model of Stuart_Armstrong, and of your reading of what he wrote. There are many ways in which an insufficiently friendly AI could harm us, and they aren’t all about “overriding difference” or “less freedom”. If (e.g.) people are entombed in bunkers, lobotomized and on medical drips, lack of freedom is not their only problem. (I confess myself at a bit of a disadvantage here, because I don’t know exactly what you mean by “overriding difference”; it doesn’t sound to me equivalent to lacking freedom, for instance. Your love of neologism is impeding communication.)
why isn’t he concerned with our current system?

I don’t believe you have any good reason to think he isn’t. All you know is that he is currently posting a lot of stuff about something else, and it appears that this bothers you.
Allow me to answer the question that I think is implicit in your first paragraph. The reason why I’m making a fuss about this is that you are doing something incredibly rude: barging into a discussion that has nothing at all to do with your pet obsession and trying to wrench the discussion onto the topic you favour. (And, in doing so, attacking someone who has done nothing to merit your attack.)
I have seen online communities destroyed by individuals with such obsessions. I don’t think that’s a serious danger here; LW is pretty robust. But, although you don’t have the power to destroy LW, you do (unfortunately) have the power to make every discussion here just a little bit more annoying and less useful, and I am worried that you are going to try, and I would like to dissuade you from doing it.
If by “overriding differences” you mean “cause the complete extinction of anything that could ever be called human, for ever and ever”.
And no, I don’t think it’s ok for humans to “cause the complete extinction of anything that could ever be called human, for ever and ever”, either.