So I’ll just call it how I see it: do you want to make self-improving AGI a reality? Then we’ll have to find a way to make it happen without involving public opinion in this decision. They’ll never consent, no matter how honestly, thoroughly, and soulfully you pitch our awesome cause.
Really? That’s not the impression I got from those numbers at all. To me, it sounds less like the public is adamantly resolved to stick with those entrenched ideas, and more like most people will believe all sorts of insane bullshit if you can spin a plausible-sounding explanation of how they might benefit from believing it, and if you persist long enough. Do you really think the vote would be a one-time thing?
There may be something to that perspective, but I think it is unrealistic to expect we could change enough people’s minds in so short a time frame. There are a lot of people out there. Religions have had thousands of years to adapt themselves in such a way that they reinforce and play into people’s innate superstitions and psychological desires. In turn, religions also shaped people’s culture, and until very recently they played the major role in the “nurture” side of the “nature and nurture” make-up of people. Competing with religion on our own terms (rationality) simply won’t work with the majority of people.
Understanding our AGI “message” requires several quantum leaps in thinking and rationality. These insights implicitly and explicitly challenge most of the innate intuitions about reality and humanity that people currently hold on to. I’m not saying there aren’t many people we could persuade without a thorough education in these matters, but because, in contrast to religion, our “worldview” doesn’t tell people what deep down they would like to hear and believe, we’re less attractive to those who just can’t be arsed into rationality. Which is a lot of people.
In conclusion, I’ll sum up my basic point in another light yet again: I don’t think I’m confronting us with a false dichotomy when I say that there are essentially only two possibilities when it comes to introducing AGI into people’s lives:
EITHER we’re willing to adhere to public consent along current democratic principles. This would entail that we concern ourselves massively with public opinion and commit to not unleashing AGI unless an absolute majority of all citizens on this planet (or of those who we consider to meet the criteria of valid consent) approve of our plan.
OR we take the attitude that people who do not meet a certain standard of rationality have no business shaping humanity’s future, and we become comfortable with deciding over their heads/on their behalf. This second option certainly does not light up any applause lights for believers in democracy, but I suspect that among lesswrongers it may not be such an unpopular notion.
You can’t have it both ways: either you commit yourself to the insane effort of making sure that the issue of AGI gets decided in a democratic and “fair” fashion, or you aim at some “morally” “lower” standard and are okay with not everyone getting their say on this issue. You know my current preference, which I favor because I find the alternative completely unrealistic, and because I’m vastly more committed to rationality than to the idea that undiscriminating democracy is the gold standard of decision-making.
What about representative democracy? Any given community sends off a few of its cleverest individuals with a mandate to either argue directly for that community’s interests, or to select another tier of representatives who will do so. Nobody feels completely excluded, but only a tiny fraction of the overall population actually needs to be educated and persuaded on the issues.
How is that representative, if only the cleverest individuals are chosen? That would be elitism, rather. And if the decision really did fall to the most rational people with herculean minds, they should in theory unanimously agree to either do it or not do it anyway, based on a sound probability evaluation and shared premises grounded in reality.
If those “representative” individuals were determined by democratic vote, then they most certainly wouldn’t be the most intelligent and rational people, but those best at rhetorically convincing others and sucking up to them by exploiting their psychological shortcomings. They would simply be politicians like the ones we have today.
So in a way we’re back where we started. If people don’t decide for themselves, they’ll simply vote for someone who represents their uninformed opinion (or provides them with a new one). Whoever wins such an election will not be the most rational person, that’s for sure (remember when America voted twice for an insane cowboy?).
While representative democracy is certainly more practical than the alternatives, I doubt the outcome would be much better. If we want the most rational and intelligent people to make this decision, then these individuals couldn’t be chosen by vote, but only by another “elitist” group. I don’t know how the public would react to that; I suppose they would not be flattered.
I’m not saying it would be a better system overall, just that a relatively small group of politicians would be comparatively easier for us to educate and/or bribe.
Yes, that is true.
I’m still puzzled, though, about which approach would be better: involving and educating the politicians (many of whom wouldn’t understand), or trying to keep them out as long as possible to avoid confrontations and constraints? I already remarked somewhere that I would find some kind of international effort towards AGI development very preferable; something comparable to CERN would be brilliant. Such a team could first work towards human-level AI and then one-up themselves with self-improving AGI once they had earned some trust in their competence.
In other words, perhaps reaching and advertising the “low-hanging fruit” of human-level AI, plus reaping the amazing benefits of such a breakthrough, would raise public and political trust in such a team, as opposed to some “suspicious” corporation or national institute that suddenly builds potential “weapons” of mass destruction.