The objection is mind-killing and agent-reputational effects, not incivility.
I find it strange that the potential for political bias is seen as so much worse than a self-imposed ban on The-Subject-Which-Must-Not-Be-Discussed. Is intellectual evasion really seen as preferable to potential bias?
If one doesn’t know, it is better to know that one doesn’t know.
The Subject Which Must Not Be Discussed? Is that still a thing? (An infohazard related to Super AIs?)
I can see two other reasons. The first is that a culture WILL develop, and if outsiders see the political culture, we might not get a chance to teach them enough rationality for them to not be mindkilled instantly.
The second is that it’s well established that smart people often believe weird and/or untrue things. Combine that with a lack of respect for political correctness (in both the old-timey sense of ‘the range of policy you can actually talk about’ and the modern offensive-language sense), a streak of contrarianism, and a site with its own culture, and you could end up with really bad politics.
We’ve got to deal with politics eventually. The whole world isn’t going to listen to the Singularity Institute just because they’ve got a Friendly AI, and it’s not like those cognitive biases will disappear by that time. Besides, I feel like LW could get more done with political brainstorming discussions, at least in the near future.
If an AGI wants you to listen, you won’t have any choice. If it doesn’t want you to listen, you won’t have the option. The set of “problems for us after we get FAI” is the null set.
Kind of, almost. It could be that we (implicitly) choose to have problems for ourselves.
In case it’s not clear: this means the FAI causing problems for us on our behalf, not us literally making a choice we are aware of.
(Or ‘choosing not to intervene to solve all problems’. The difference matters to some, even if it is somewhat arbitrary.)
Are you saying that an AGI would distribute relevant information to the public, compelling them to make sound political choices?
That doesn’t sound very likely to me for either a friendly or an unfriendly AI. Letting people feel disenfranchised might be bad Fun Theory, but it would take a lot more than distribution of relevant information to get ordinary, biased humans to stop fucking up our own society.
As a general rule, I’d say that if a plan sounds unlikely to effectively fix our problems, an FAI is probably not going to do that.
I thought he was saying that once you have a Super AI, you don’t have to deal with politics.
That doesn’t sound like something I’d infer from his previous comment.
‘Just’ because they’ve got an FAI? Once you have an FAI (and nobody else has a not-friendly-to-you-AI) you’ve more or less won already.
Apart from being able to protect against any political threat (and so make persuasion optional, not necessary) an FAI could, for example, upgrade Eliezer to have competent political skills.
The politics that MIRI folks would be concerned about are the politics before they win, not after they win.
Work done by LessWrongians could decrease the workload of such an FAI while providing immediate results. If it takes twenty years for such a thing to be developed, that’s twenty years during which civilization could move in either direction on the good/bad scale. That could make an entire year’s difference in how long it takes an FAI to implement whatever changes would make society better.
You are not taking AI seriously. Is this intentional?
A superintelligence could likely take over the world in a matter of days, no matter what people thought. (They would think it was great, because the AI could manipulate them better than the best current marketing tactics, even if it couldn’t just rewrite their brains with nano.)
It may not do this, for the sake of our comfort, but if anything was urgent, it would be done.
While I wouldn’t dismiss this possibility at all, you seem a little overconfident. The best current marketing tactics can shift market share a percentage point or two, or maybe make a half-percentage-point difference in a political campaign. Obviously better than the best is better. But assuming ethical limitations on persuasion tactics and general human suspicion of new things, “days” seems pretty optimistic (and twenty years pessimistic). There’s no good reason to think the persuasive power of marketing is at all linear in the intelligence of its creator. We ought to have very large error bars on this kind of thing, and while the focus on these fast-takeover scenarios makes sense for emphasizing risk, that focus will make them seem more likely to us than they actually are.
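A toy back-of-envelope sketch of that last point (all numbers are illustrative assumptions, not figures from the thread): if the best current campaigns shift opinion by roughly half a percentage point over a whole campaign, then persuading a majority within days implies an effectiveness multiplier over current marketing that we have no data to estimate, which is why the error bars should be so wide.

```python
# Toy back-of-envelope sketch: how much more effective than the best current
# marketing would persuasion have to be for a "takeover by persuasion in days"
# scenario? All numbers below are illustrative assumptions, not measurements.

current_shift_per_campaign = 0.005   # ~0.5 percentage points of opinion per campaign (assumed)
campaign_length_days = 90            # assumed length of a modern political campaign
needed_shift = 0.50                  # assumed opinion shift required to "win" by persuasion alone
available_days = 7                   # the "matter of days" scenario

# Daily shift achieved by the best current marketing (assumed uniform over the campaign).
current_shift_per_day = current_shift_per_campaign / campaign_length_days

# Daily shift the superintelligence would need to hit the target in time.
needed_shift_per_day = needed_shift / available_days

multiplier = needed_shift_per_day / current_shift_per_day
print(f"Required effectiveness multiplier over current marketing: ~{multiplier:,.0f}x")
# Roughly a thousand-fold improvement; whether extra intelligence buys that is
# exactly the quantity we have no data on -- hence the very large error bars.
```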