One career path I’m sort of musing about is working to create military robots. After all, the goals in designing a military robot are similar to those in designing Friendly AI: the robot must know somehow who it’s okay to harm and what “harm” is.
Does this seem like a good sort of career path for someone interested in Friendly AI?
If you work on AGI and you make actual progress, then you have a moral obligation to keep it away from people who can’t be trusted with it. You cannot satisfy this obligation while working for a military or a military contractor.
I’m not an expert, but I don’t think there is much more overlap with FAI than other domain AI projects have. The problems for military robots probably are more of the machine vision kind than of the meta-ethics kind.
Am I the only one to think that no, creating military robots isn’t a “good career path” towards friendly AI, because creating military robots is inherently unfriendly to humanity? Especially if you live in the US and know that your robots will be used in aggressive wars against poorer countries. It’s some kind of crazy ethical blindness that most Americans seem to have for some reason, where “our guys” are human beings, but arbitrarily chosen foreigners deserve whatever they get… Just like this incident I saw on HN when one guy asked about career prospects working for the occupation force in Iraq, and another answered that it’ll be an “amazing and unique experience”. You’ll note my reply there was much more concise.
It’s some kind of crazy ethical blindness that most Homo sapiens seem to have for some reason, where “our guys” are human beings, but arbitrarily chosen foreigners deserve whatever they get
Fixed it for you.
And the reason is evolved psychological instincts with pretty obvious selection benefits.
I don’t think that’s an accurate correction. Because America is the current hegemonic power, Americans can get away with feeling that other nations aren’t “real” in the sense that the USA is. For example, when considering some hypothetical situation that would concern the whole planet, an American might only consider how the USA would react, while anyone else in the same situation would, in addition to the reaction of their own nation, at the very least also have to consider how the USA reacts, and might even consider other nations, since their situation is more obviously symmetrical to their own.
Because America is the current hegemonic power, Americans can get away with feeling that other nations aren’t “real” in the sense that the USA is.
I’m afraid I don’t know what this means.
For example, when considering some hypothetical situation that would concern the whole planet, an American might only consider how the USA would react, while anyone else in the same situation would, in addition to the reaction of their own nation, at the very least also have to consider how the USA reacts, and might even consider other nations, since their situation is more obviously symmetrical to their own.
There might be pragmatic realities that force non-Americans to consider the reactions of foreigners more than Americans must. Americans have two oceans and the world’s strongest military to keep a lot of foreign troubles far away; other people do not. But this isn’t evidence that Americans care less about foreigners than those from other countries do. It sounds like you’re talking about a political blindness instead of an ethical blindness. Besides, there is equally good reason to think America’s hegemonic status makes Americans more worried about foreign goings-on, since American lives and American business concerns are more often at stake.
Not “real” is the best description I have. You could say it’s having the same sort of attitude towards other nations that you might have towards Oz, Middle Earth or the Empire from Star Wars, even though you intellectually know that they really exist, but that only comes close to what I mean. I must stress that not all Americans have this attitude, but some seem to, and that’s enough to influence the discourse.
But this isn’t evidence that Americans care less about foreigners than those from other countries do. It sounds like you’re talking about a political blindness instead of an ethical blindness.
I was thinking more of, e.g., first contact situations in SF stories and things like that, not necessarily normal international politics, but I think it extends to all fields: domestic politics (the amount and kind of consideration given to the fact that a policy seems to work well somewhere else), pop culture, sports, science, language learning; wherever one might consider other nations, Americans have more leeway not to do so. This doesn’t by necessity have to extend to ethical considerations, but when cousin_it observes that it appears to, it seems inappropriate to me to “correct” that out.
I must stress that not all Americans have this attitude, but some seem to, and that’s enough to influence the discourse.
Exactly zero evidence has been presented that Americans have this ill-defined attitude at a higher rate than non-Americans.
wherever one might consider other nations, Americans have more leeway not to do so.
No reason given to think this is the case on balance.
This doesn’t by necessity have to extend to ethical considerations, but when cousin_it observes that it appears to, it seems inappropriate to me to “correct” that out.
The obvious and straightforward interpretation of cousin_it’s comment was that he was referring to American nationalism. A real and quite common phenomenon in which Americans don’t give a lick about people who don’t live in their country (in civilized places this is referred to as racism). I’ve met plenty of people with this view. It is a disgusting and immoral attitude. That said, it is a near ubiquitous attitude. Humans have been killing humans from other groups and not giving a shit for as long as there have been humans. We’re good at it. Really good. We do it like it’s our job. In no way is this unique to residents or citizens of the United States of America. If cousin_it meant something else he can clarify. He’s been commenting elsewhere throughout this conversation anyway.
It is a disgusting and immoral attitude. That said, it is a near ubiquitous attitude. Humans have been killing humans from other groups and not giving a shit for as long as there have been humans. We’re good at it. Really good. We do it like it’s our job. In no way is this unique to residents or citizens of the United States of America. If cousin_it meant something else he can clarify. He’s been commenting elsewhere throughout this conversation anyway.
Yes! Thank you! Finally, a human user says what I’ve been trying to say all along! (See for example here.)
On my first visit to Earth (or perhaps the first visit of one of my copies before a reconciliation), my reaction was (translated from the language of my logs):
“The Alpha species [i.e. humans] inflicts disutility on its members based on relative skin redness. I’m silver. Exit!”
The obvious and straightforward interpretation of cousin_it’s comment was that he was referring to American nationalism. A real and quite common phenomenon in which Americans don’t give a lick about people who don’t live in their country (in civilized places this is referred to as racism). I’ve met plenty of people with this view. It is a disgusting and immoral attitude. That said, it is a near ubiquitous attitude. Humans have been killing humans from other groups and not giving a shit for as long as there have been humans. We’re good at it. Really good. We do it like it’s our job. In no way is this unique to residents or citizens of the United States of America. If cousin_it meant something else he can clarify. He’s been commenting elsewhere throughout this conversation anyway.
While everything you say about nationalism is true, it’s not obvious to me that it explains what cousin_it was talking about, at least not to its full extent. Degradation of other people through nationalism usually evokes hate (“those damned X!”), while the linked comment seemed too cheerful for that; it’s not like it encouraged anyone to “help show it to those stinkin’ Arabs” or anything like that. As if the fact that someone might be hurt simply didn’t occur to them. There has been plenty of that in other historical cases of nationalism, but I think usually only in similarly asymmetrical situations. Nationalism in symmetrical situations seems to be of the plain hate kind.
Degradation of other people through nationalism usually evokes hate (“those damned X!”), while the linked comment seemed too cheerful for that; it’s not like it encouraged anyone to “help show it to those stinkin’ Arabs” or anything like that. As if the fact that someone might be hurt simply didn’t occur to them.
Nationalism almost always displays as willful ignorance or apathy about the condition of those outside the nation. It’s nation-centrism, in other words. Hatred is an extreme case (thus the moniker “ultra-nationalism”).
Nationalism in symmetrical situations seems to be of the plain hate kind.
This just isn’t true. At all. I’m not even sure where you would get it. There are nationalists all around the world who do not express hate toward other nations, even in cases of power symmetries.
More importantly: Why are we arguing about this? Cousin_it isn’t some old philosopher or public intellectual who we can’t reach for clarification. If he wants to correct my understanding of his comment let him do it.
Sorry for taking so much time to reply. FAWS is right, I’m not saying Americans hate foreigners. It’s more like a blindness or deafness. See my link above to the “amazing and unique experience” guy. The ethical angle of the situation simply doesn’t occur to him, it’s as if Iraqis were videogame characters. America’s fighting an aggressive war and killed umpteen thousand people?… uh, okay man, I got a career to advance and I wanna go someplace exotic, like expand my horizons and shit. I’ve never heard anything like that from Russians or anyone else except Americans, though I’d be the first to agree that we Russians are quite nationalistic.
Nationalism almost always displays as willful ignorance or apathy about the condition of those outside the nation.
The original disagreement wasn’t about the term nationalism (and I never claimed that nationalism didn’t explain it, only that what you said about nationalism up to that point didn’t), so you seem to be arguing my point here: For the reasons I described it’s easier for Americans to be “ignorant about the condition of those outside the nation”.
This just isn’t true. At all. I’m not even sure where you would get it. There are nationalists all around the world who do not express hate toward other nations, even in cases of power symmetries.
You can’t keep hurting someone and not even notice you’re doing it in a symmetrical conflict, because they will hurt you back, and then you will want revenge in turn.
More importantly: Why are we arguing about this?
You seem to be of the opinion that you can’t even coherently/rationally (?) think a certain thing and I disagree. That disagreement is independent of the question whether anyone had actually been thinking that.
EDIT: Nation-centrism is close to what I meant with not feeling that other nations are “real”.
For the reasons I described it’s easier for Americans to be “ignorant about the condition of those outside the nation”.
“willful” ignorance… Do we really need to spend time distinguishing nationalism from the fact that the US gets the NBA?
You can’t keep hurting someone and not even notice you’re doing it in a symmetrical conflict, because they will hurt you back, and then you will want revenge in turn.
So what you want to claim is that asymmetrical conflict is more likely than symmetrical conflict to lead to people in one country being ignorant of the animosity against them in the other country. This is plausible, though several counterexamples come to mind, and I’m not sure it applies, since a large portion of American nationalists appear to conceive of the conflict as a symmetrical one (this has been a minor issue in American politics, of course). I’m not sure I see how this issue relates to nationalism exactly and what its relevance is. But as you can see below I’m not sure I understand what you’re claiming at this point.
You seem to be of the opinion that you can’t even coherently/rationally (?) think a certain thing and I disagree. That disagreement is independent of the question whether anyone had actually been thinking that.
WHAA? This is incredibly vague and confusing. I honestly have no idea what you’re talking about.
“willful” ignorance… Do we really need to spend time distinguishing nationalism from the fact that the US gets the NBA?
And the fact that you neither need to make any significant sacrifices nor engage in double-think doesn’t make willful ignorance easier?
So what you want to claim is that asymmetrical conflict is more likely than symmetrical conflict to lead to people in one country being ignorant of the animosity against them in the other country.
Not really. The term nationalism is unhelpful. There seem to be at least two kinds: the we’re-great-don’t-care-about-anyone-else nation-centric kind, and the unite-against-the-enemy-us-or-them kind. My point is that being a hegemonic power facilitates the nation-centric kind. The sub-point is that a hot symmetric conflict turns nationalism into the second kind pretty much by necessity, even if it started out as the first kind. An asymmetric conflict of course allows either kind in the stronger party; presumably that’s what your counter-examples show.
WHAA? This is incredibly vague and confusing. I honestly have no idea what you’re talking about.
Presumably you detected a feature that made the post knowably correctable. If that feature wasn’t an incoherent or irrational (in light of further evidence you have available) opinion, what was it?
A real and quite common phenomenon in which Americans don’t give a lick about people who don’t live in their country (in civilized places this is referred to as racism).
That sounds like nationalism rather than racism to me. The country you live in has only a loose correlation with the colour of your skin. If people favoured countries which had a strong majority of people of a particular ethnicity that might be evidence for racism.
I was speaking loosely in the parenthetical. Nationalism has a strong tendency to manifest as racism and racism has a similar tendency to manifest as nationalism. They’re highly correlated but yes, conceptually distinct.
No reason given to think this is the case on balance.
Because I thought it would be obvious enough.
Americans are less likely to learn foreign languages, most Americans don’t even have a passport, it’s easier to write a science paper without referencing any non-American research (not that I think this is done at a significant rate, but the equivalent would be unthinkable elsewhere), foreign movies are generally either ignored or remade (and set in the USA if possible), foreign trade is a smaller percentage of GDP than in just about any other developed nation, it’s possible to “buy American” for a greater range of products than the equivalent anywhere else, and America has the top leagues for the sports it cares about. (It’s not just that America cares for different sports than the rest of the world; for almost all countries the top level of the sport that country cares most about is at least in part played elsewhere, so a soccer fan in e.g. Romania has to pay attention to the English Premier League, the Spanish Primera División, etc. [and even the English and Spanish fans have incentive to pay attention to each other’s leagues, because they are at roughly equal level and the top teams regularly play each other]. If America cared about soccer the top league would be there, so Americans still wouldn’t have any reason to pay attention to foreign sports.)
I think most of those things could be expected regardless of whether America has any such putative hegemonic status. Most Americans don’t have passports because they can’t afford to travel to another continent, and the number is rising now that passports are required to visit other countries in North America. Getting a passport in the US is a fairly annoying, expensive process, so I’m not surprised most people haven’t bothered. Ditto with the foreign languages—most Americans don’t meet or talk to people who don’t speak American.
I haven’t been able to find a source online—do most Chinese people speak foreign languages and have passports? Are they required?
Most Americans don’t have passports because they can’t afford to travel to another continent, and the number is rising now that passports are required to visit other countries in North America. Getting a passport in the US is a fairly annoying, expensive process, so I’m not surprised most people haven’t bothered.
Getting a passport is a bother everywhere; the point is that Americans don’t really need a passport because their country is huge, rich and powerful, and they can take a vacation in whatever climate they like without ever leaving their borders. People in other developed nations would have to make much greater sacrifices to never travel abroad.
Ditto with the foreign languages—most Americans don’t meet or talk to people who don’t speak American.
That’s exactly my point! They can do that without missing all that much, unlike most of the planet.
do most Chinese people speak foreign languages
IIRC compulsory foreign language instruction (mostly in English) starts in third grade, and many educated Chinese learn a third/fourth language later. For many Chinese, Mandarin is effectively an L2 language, so they know their native dialect, Mandarin and some English. The state of English learning is mostly horrible and only a minority can communicate effectively, but I’d think that Chinese on average speak better English than non-native-speaker Americans speak Spanish, and the difficulty is much greater.
I’m not all that clear about the passport situation and foreign travel, and China is a bad example anyway because it is itself an enormous country and very “nation-centric”, but a huge number of Chinese study abroad, while there is no comparable reason for Americans to do so because they already have many of the most prestigious universities.
Why was this voted down? Was there anything in this post that isn’t either objectively true (Americans have more leeway to ignore other nations) or clearly marked as speculation (“seem to”)? Is it inherently irrational to consider the hypothesis that cousin_it’s observation was meant exactly as stated, and then to speculate about what might be behind this observation?
“War is bad, the military industrial complex is evil,” sounds good, and it hits all the right emotional buttons (care for humanity, etc.), but it is not necessarily true when all of the costs and benefits are taken into account. A defensive military allows intellectual, cultural, economic, and artistic endeavors to flourish without fear of attack. Destruction of infrastructure can open the way for rebuilding into a far better environment, and massive war spending can push the boundaries of technology. Reshaping political landscapes can cause huge culture shifts through decades which may result in much more open, and better, societies.
Suffering is terrible; death is abhorrent; and the benefits are uncertain enough that they should not be used as arguments to start an otherwise preventable war. But I do not see how we can appropriately judge the complex results of “war in general” on the timeline of decades or centuries.
What I can certainly agree with is that contributing to the military is bad on the margins, since it’s already getting more than its share of resources thanks to others of a more bloodthirsty bent.
A defensive military allows intellectual, cultural, economic, and artistic endeavors to flourish without fear of attack.
At this point I laughed with a kind of sad laugh. Everyone who thinks America will use military robots for self-defense, raise your hands! On the other hand, you’ve made a wonderful argument that a strong offensive US military stifles cultural/economic/artistic endeavours worldwide due to fear of attack, though I’m sure you didn’t mean to.
Everyone who thinks America will use military robots for self-defense, raise your hands!
They will use them for defense as well as for offense. I’ve already seen several articles about American cities ready to purchase military drones for law enforcement purposes, and I would be very surprised if they were not also added to strategic military bases within America to defend against potential attackers. At the very least, when countries are making strategy decisions that may involve the military, the mere existence of drones will serve as a deterrent.
On the other hand, you’ve made a wonderful argument that a strong offensive US military stifles cultural/economic/artistic endeavours worldwide due to fear of attack, though I’m sure you didn’t mean to.
My point was to state the necessity of defense. If there are strong, warlike countries with military drones, such as the United States, then other countries had better start developing countermeasures to protect themselves. That, or ally themselves with the strong country in the hopes of falling under their protection rather than their ire. As such, staying ahead of the other countries is a valid strategy.
And I would certainly agree that US aggressiveness is stifling those very things in Iraq, Afghanistan, Iran, etc. The word ‘fear’ was poorly chosen. I was thinking more of what happened to Tibet and all those pacifists when they failed to muster an appropriate military defense: actual invasion and displacement or destruction.
I’ve already seen several articles about American cities ready to purchase military drones for law enforcement purposes
Oddly I don’t seem to have a reference handy, but several US cities already use robots in law enforcement. iRobot and Foster-Miller really took off after the success of their robot volunteers at the WTC.
As I see it, it’s less about how much harm those specific things do, and more about how viable the alternatives are.
How viable, given that you want to live in relative comfort and ease. But if a true valuation is made, then perhaps that should not be taken as given, considering the costs.
There are various arguments that building military robots is bad, but I don’t think you’ve touched on any good ones. When you look at how unreliable human soldiers are on the field, creating military robots just seems like an obvious way to make things better for everyone involved. Fewer American casualties because we’re using robots, and fewer civilian casualties because the robots are better at not shooting at civilians.
Also, FWIW, most military robots currently aren’t the sort that shoot people—they do things like look around corners, draw fire, perform aerial surveillance, and detect/defuse bombs.
It’s some kind of crazy ethical blindness that most Americans seem to have for some reason, where “our guys” are human beings, but arbitrarily chosen foreigners deserve whatever they get...
Then you wrote:
...an obvious way to make things better for everyone involved. Fewer American casualties because we’re using robots, and fewer civilian casualties because the robots are better at not shooting at civilians.
This happens to pixel-perfectly demonstrate my point about ethical blindness. Reread my quote again, then your quote, then mine, then yours again. Notice anything wrong? Anything missing?
You see, you omitted one pretty important group: everyone America calls “enemy combatants”. If you think all of them are bad people and deserve to die, then you obviously don’t get it. Repeat after me: America Starts Aggressive Wars. Then say it again because it’s true and truth won’t suffer from repetition. Say it as many times as you need to make it sink in, then come back and we will resume this discussion.
America will be killing those people with or without robots. We already have ways of wiping all of the enemy combatants off the map if we want to (for example nukes). Military technology is primarily about finding ways to 1) kill fewer of our own soldiers and 2) kill fewer people who aren’t enemy combatants.
America will be killing those people with or without robots
Not necessarily. All else equal, the less it costs to wage a war (in money, American lives, and good will), the more likely leaders are to actually start one.
Ignoring the question of whether that’s desirable or not (politics is the mindkiller), reducing the cost of killing those people will lead to more of them being killed in marginal situations where such considerations matter.
Yes, that’s one of the good arguments against robot soldiers I mentioned above. We’re more likely to not care about the fate of our robot soldiers, and so would be less hesitant to send them into battle. Though it’s still an open question whether that effect would trump any increased monetary cost per soldier (if any) and whether the other benefits outweigh such concerns.
Human soldiers perform horribly in terms of following the rules of war, and beyond that sometimes do absolutely horrible things.
You don’t even have to go as far as “America Starts Aggressive Wars”—“Under the right conditions, America is capable of starting aggressive wars, and is more likely to do so if the cost of doing so is lowered.”
Look, I get the “Politics is the Mind Killer” mantra, and I agree that it would be fruitless to start a debate about something like abortion here—it comes down to definitions and conventions about what is moral.
But when something is actually, demonstrably, true, refusing to look at and examine the truth because it is painful to do so is not compelling. It doesn’t even trigger most of the reasons in “politics is the mindkiller”—both major U.S. political parties are just fine with most of the examples. The only two teams that can credibly be put in opposition here are “U.S.A.” and “Everyone else”.
You don’t even have to go as far as “America Starts Aggressive Wars”—“Under the right conditions, America is capable of starting aggressive wars, and is more likely to do so if the cost of doing so is lowered.”
It is worth noting that to complete the argument someone needs to show that America starting aggressive wars is bad. The people starting such wars, it turns out, have their reasons.
ETA: to tell the truth, until I dug up that last Wikipedia page just now for purposes of argument, I still had no clear idea how much this happened. And give these people autonomous killer robots? In the name of developing Friendly Intelligence?
That’s why. Folks will disagree that’s something that the US does, and pointing to things the US might have done decades ago won’t convince them. There’s no way to even debate this point without going down a potentially mind-killing rabbit hole, and I find it hard to believe you weren’t aware of this when you posted it.
In case you weren’t aware of it: I live in the US, and I’ve talked to a number of ordinary folks and a number of scholarly folks about it, and I don’t tend to encounter people who would grant that the US starts aggressive wars. You should be able to see why someone who thinks that would be angry and vocal about the accusation.
The difference between specialized FAI and general FAI is like the difference between adaptation executors and fitness maximizers. It’s a big difference.
Is specialized FAI even a meaningful term? ISTM that to implement actual friendliness even in a specialized application an AI needs capabilities that imply AGI.
It’s a nonstandard term that seemed appropriate to the discussion. By specialized FAI, I mean an AI that reliably does the thing it was made to do in a specific context.
Sounds like a good idea, but here are my reservations/warnings:
1) For the kind of work you describe, you would probably need a high-level security clearance and continued scrutiny on your life (to make sure you don’t share it with the wrong people), and you probably wouldn’t be able to publicly discuss your work. (i.e., where SIAI can hear it.)
2) What are your chances you’ll actually get to work on the aspect of the problem that relates to Friendliness?
The scrutiny isn’t so bad. They’re mainly looking for illegality or potential for corruption. And even if you’ve committed illegal acts, so long as you own up to it, and it wasn’t in the recent past (5 to 7 years), it’s generally OK. Felonies are a different matter, of course.
A secret clearance involves an interview, fingerprinting, interviews of family and friends, interviews of neighbors, a credit check, and will likely require drug testing. Top secret clearances and above lead to polygraphs and heavy grilling, with monitoring for new developments. They’re renewed every few years, going through the process again.
Most of the military drone programs would be given to one large contractor like Lockheed Martin or NGIT, with lots of smaller subcontractors. A security clearance at secret level or above takes up to 9 months, costs the company over $10,000, and adds that much or more to that person’s annual salary potential, so it’s not something they hand out lightly.
Most contracting agencies put a small, already-cleared team on the activities that require it, and farm out most of the work (documentation, mundane code, etc.) to people without clearances. If they need more people with clearances, they tend to get temporary waivers for the duration of the work (90 days or less, for example). Most only see a small part of the whole, and you don’t choose your projects; your company does.
These are not good environments to learn complex, high-level things like Friendliness.
It wasn’t the background scrutiny I was worried about so much as this:
“Alright, it’s been fun doing this research on human-level intelligent robots. Oh, hey, I’m going to go to an AI conference in Shanghai...” “Hahahahahaha! Good one! Um … were you being serious?”
Yep. And so could the appearance on the internet of an e-book about “How to build a human-level armed android, by Warrigal”, when Warrigal has worked at such a job.
And if you go to a potentially hostile country without telling them … well, I guess you’ll get the option of a PMITA federal prison, or solitary.
No. FAI is about figuring out how to implement precise preference, not an approximation of it appropriate for non-magical environments. Requires completely different tools.
It seems that to work on FAI, one has to become a mathematician and theoretical computer scientist (whatever the actual career).
I gave a link! A non-magical environment gives limited expressive power, so there are few surprising situations that the given heuristics don’t capture. With enough testing and debugging, you may get your weakly intelligent robot to behave. Where more possibilities are open, you have to get preference exactly right, or the decisions will be obviously wrong (see The Hidden Complexity of Wishes).
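To illustrate the difference with a toy (all names and rules here are hypothetical, not drawn from any real robotics or FAI system): a heuristic that can be exhaustively tested in a three-action environment only ever encodes an approximation of preference, and that approximation gives obviously wrong answers as soon as the action space grows.

```python
# Illustrative toy only: hypothetical names and rules, not from any real system.

RESTRICTED_ACTIONS = ["hold_position", "retreat", "request_human_review"]

def restricted_policy(target_is_armed: bool, target_is_surrendering: bool) -> str:
    # In a narrow, fully enumerated environment the rule is short and can be
    # debugged by exhaustive testing.
    if target_is_surrendering or not target_is_armed:
        return "hold_position"
    return "request_human_review"

# A richer environment offers options the designers never enumerated.
EXPANDED_ACTIONS = RESTRICTED_ACTIONS + [
    "jam_communications",     # side effects on bystanders nobody scored
    "destroy_power_station",  # "effective" by the old proxy, obviously wrong
]

def naive_expanded_policy() -> str:
    # A proxy score tuned on the restricted setting happily picks the
    # catastrophic new option, because the true preference was never specified.
    proxy_score = {"destroy_power_station": 10.0}
    return max(EXPANDED_ACTIONS, key=lambda a: proxy_score.get(a, 1.0))

if __name__ == "__main__":
    print(restricted_policy(target_is_armed=True, target_is_surrendering=False))
    print(naive_expanded_policy())  # -> destroy_power_station
```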
I have very little in the way of morality, but I personally draw the line at supporting the military industrial complex. I don’t think helping the military make robots that make kill decisions themselves has much to do with provable mathematical Friendliness.
It seems you are morally obliged to at least investigate possible mechanisms for tax evasion. But then, morality doesn’t have all that much to do with consequences.
One practical way for me to evade taxes is to start a startup and sell it, which means my income will be taxed at the much lower capital gains rate.
Also, I draw a distinction between something I am comfortable doing, and the likely future progress of society as a whole. Killer robots aren’t going away anytime soon, and except for the extra wars they will allow us to have, killer robots result in fewer US deaths and more effective military tactics than troops on the ground. I expect that US killer robots will be making kill decisions, or at least very strong kill suggestions that are followed 99% of the time, within 10 years. There’s just too much data coming in too fast for a single human operator to be able to process.
If the African totalitarians are still around in 25 years, the possibility of being conquered by an army of killer robots may make them more amenable to internationally monitored elections.
So good and bad things will come about as a result of the killer robot armies of the future. It’s really the military industrial complex as a whole I object to; robots making kill decisions is one of the less objectionable things within the military industrial complex.
One practical way for me to evade taxes is to start a startup and sell it, which means my income will be taxed at the much lower capital gains rate.
Uh, that’s a pretty dumb thing to say. For one, starting a startup and selling it has rather broader consequences than a typical tax avoidance strategy. That’s like suggesting moving to a third world country to cut down on your daily living expenses—your food and accommodation costs may indeed decrease, but it significantly changes your life in all kinds of other ways as well. For another, this would not be tax evasion but tax avoidance, which has the rather significant difference of being entirely legal.
I’m fully aware of the distinction; I was playing with the ambiguous distinction between evasion and avoidance (as you say, the distinction being that avoidance is legal) by using the language of the person I replied to. I was trying to imply that there is no profound difference between avoidance and evasion, just the definitions given by the rule of law.
I assumed wedrifid knew the difference and was suggesting you were morally bound to evade rather than merely avoid taxes if you draw the line at supporting the military industrial complex. I don’t necessarily agree with that but I took that to be his point.
I would have thought that maximizing tax avoidance is something that any aspiring rationalist ought to be doing as a matter of course.
I was trying to imply that there is no profound difference between avoidance and evasion, just the definitions given by the rule of law.
The fact that you can go to jail for tax evasion seems like a pretty profound difference from tax avoidance to me. The whole tax structure is ‘just’ the definitions given by the rule of law.
I like living in a country with a government compared to Somalian anarchism, but not compared to libertarian utopia. This is getting close to politics.
I think the general consensus is that we tread carefully when straying into political territory and tend to avoid explicitly political (certainly party political) discussion but that we don’t entirely avoid discussion that has a political dimension. Taken to an extreme that would seem to preclude most topics of any interest or significance. Generally the standard of discourse is fairly high here and political slanging matches are avoided.
And I still don’t consider it a political point that you basically fail at instrumental rationality if you overpay on your taxes.
I don’t see the contradiction. The government creates the tax code with at least the stated intention of encouraging or subsidizing certain behaviours over others. That only works if people respond rationally to the incentives.
From the individual rationalist’s point of view one should aim to optimize one’s resources. In the context of taxes that generally means arranging your financial affairs to minimize the taxes paid without breaking the law. You can then choose how to best meet your own goals by allocating the money you save as you see fit.
It is only rational to not avoid taxes if you either believe the effort required to avoid them is not worth the money saved or if you believe that the optimal use of the money is to give it to the government. It seems unlikely in the latter case that the optimal amount to give to the government just happens to be the very amount they take from you so you should probably be voluntarily donating a larger portion of your income to the government. If you live in the US you should go here.
In the context of taxes that generally means arranging your financial affairs to minimize the taxes paid without breaking the law.
Since we were talking about choice of career among other things, it’s worth stating that your actual incentive here more closely resembles “maximizing your after-tax income” than “minimizing your taxes paid”.
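A toy comparison (made-up salaries and a single flat rate, purely illustrative) shows how the two objectives come apart:

```python
# Toy numbers only (hypothetical salaries, one flat 25% rate); the point is
# just that "minimize taxes paid" and "maximize after-tax income" can
# recommend different choices.
TAX_RATE = 0.25

def taxes_paid(gross: float) -> float:
    return gross * TAX_RATE

def after_tax_income(gross: float) -> float:
    return gross * (1 - TAX_RATE)

career_a = 60_000  # lower salary
career_b = 90_000  # higher salary

# Minimizing taxes paid favors career A (15,000 vs 22,500 in tax),
# but maximizing after-tax income favors career B (67,500 vs 45,000 take-home).
print(taxes_paid(career_a), after_tax_income(career_a))  # 15000.0 45000.0
print(taxes_paid(career_b), after_tax_income(career_b))  # 22500.0 67500.0
```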
True, I was focusing slightly more narrowly on the idea of minimizing your tax burden at your current income level without making major changes in your career, country of residence, etc., but on a longer timescale or in the context of broader life goals you are aiming to maximize your after-tax income rather than minimize the taxes you pay.
I don’t think I’m morally bound to evade taxes for the same reason I’m not morally bound to stop the world’s massive amounts of animal suffering. My utility function breaks if I take my morality too seriously. As you say, I am somewhat bound morally to try and evade taxes or even actively stage insurrection against my government. Both of those seem like very bad ideas, as the state will just crush me.
Not working for the government in lieu of trying to bring down the government is similar to my decision to eat less meat rather than trying to make the whole world eat less meat. Yes, I am aware that these are not anywhere close to perfectly analogous decisions.
I’d say yes, go for it. The value would be in gaining experience in designing AI systems that have to work in the real world—a very different proposition from systems that only have to work in the laboratory or in the imagination.
Cognitive neuroscience and cognitive psychology are far more relevant. A Friendly AI is a moral agent; it’s more like a judge than a cruise missile. A killer robot must inflict harm appropriately but it does not need to know what “harm” is; that’s for politicians, generals, and other strategists.
We have to extract the part of the human cognitive algorithm which, on reflection, encodes the essence of rational and moral judgment and action. That’s the sort of achievement which FAI will require.
The problems involved in creating ethical military robots are vastly different from those involved in general AI. Ron Arkin’s Governing Lethal Behavior in Autonomous Robots does a good job of describing how one should think when building such a thing. Basically, there are rules for war, and the trick is to just implement those in the robot, and there’s very little judgement left over. To hear him explain it, it doesn’t even sound like a very hard problem.
To hear him explain it, it doesn’t even sound like a very hard problem.
Then I’m not sure he understands the problem. How does the robot tell the difference between an enemy soldier and a noncombatant? When they’re surrendering? When they’re dead/severely wounded?
The rules of war themselves are fairly algorithmic, but applying them is a different story.
Well there’s a bit of bracketing at work here. Distinguishing between an enemy soldier and a noncombatant isn’t an ethical problem. He does note that determining when a soldier is surrendering is difficult, and points out the places where there really is an ethical difficulty (for example, someone who surrenders and then seems to be aggressive).
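To make that bracketing concrete, here is a minimal sketch (hypothetical structure and field names, not Arkin’s actual governor design): the codified rules reduce to a few predicates, and everything this thread calls hard is pushed into the classification inputs that the check takes as given.

```python
# Minimal illustrative sketch, not any real "ethical governor" implementation.
# Every field below is assumed to be supplied by perception/classification,
# which is exactly the part identified above as difficult.
from dataclasses import dataclass

@dataclass
class TargetAssessment:
    is_combatant: bool        # hard: distinguishing soldier from noncombatant
    is_surrendering: bool     # hard: recognizing surrender
    is_hors_de_combat: bool   # hard: dead or severely wounded
    expected_collateral: int  # hard: estimating harm to bystanders

def engagement_permitted(t: TargetAssessment, max_collateral: int = 0) -> bool:
    # Given those inputs, the codified rules become simple predicates.
    if not t.is_combatant:
        return False
    if t.is_surrendering or t.is_hors_de_combat:
        return False
    if t.expected_collateral > max_collateral:
        return False
    return True
```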
This is a good question, I would appreciate more discussion of it on LW. I am wondering about similar issues: my research involves computer vision, the most obvious applications of which are for surveillance and security. One does not need to be a science fiction author or devotee to imagine powerful computer vision tools or military robots being used for evil.
Whether something can be used for evil or not is the wrong question. It’s better to ask “How much does computer vision decrease the cost of evil?” Many of the bad things that could be done with CV can be done with a camera, a fast network connection, and an airman in Nevada, just as many of the good medical applications can be done by a patient postdoc or technician.
Better still is to ask, “What are the benefits and harms of doing this rather than something else, including cascading consequences on to the indefinite future?” Which, of course, is murderously hard to answer in cases this far removed from direct consequences.
Which is what I meant when I said computer vision research was not distinguished. Although upon consideration I would weaken the claim to “not strongly distinguished”, which might still be enough to justify doing something else.
One career path I’m sort of musing about is working to create military robots. After all, the goals in designing a military robot are similar to those in designing Friendly AI: the robot must know somehow who it’s okay to harm and what “harm” is.
Does this seem like a good sort of career path for someone interested in Friendly AI?
If you work on AGI and you make actual progress, then you have a moral obligation to keep it away from people who can’t be trusted with it. You cannot satisfy this obligation while working for a military or a military contractor.
I’m not an expert, but I don’t think there is much more overlap with FAI than other domain AI projects have. The problems for military robots probably are more of the machine vision kind than of the meta-ethics kind.
Am I the only one to think that no, creating military robots isn’t a “good career path” towards friendly AI, because creating military robots is inherently unfriendly to humanity? Especially if you live in the US and know that your robots will be used in aggressive wars against poorer countries. It’s some kind of crazy ethical blindness that most Americans seem to have for some reason, where “our guys” are human beings, but arbitrarily chosen foreigners deserve whatever they get… Just like this incident I saw on HN when one guy asked about career prospects working for the occupation force in Iraq, and another answered that it’ll be an “amazing and unique experience”. You’ll note my reply there was much more concise.
Fixed it for you.
And the reason is evolved psychological instincts with pretty obvious selection benefits.
I don’t think that’s an accurate correction. Because America is the current hegemonic power Americans can get away with feeling that other nations aren’t “real” in the sense the USA are. For example when considering some hypothetical situation that would concern the whole planet an American might only consider how the USA would react, while anyone else in the same situation would in addition to the reaction of their own nation at the very leasts also have to consider how the USA reacts, and might even consider other nations since their situation is more obviously symmetrical to their own.
I’m afraid I don’t know what this means.
There might be pragmatic realities that force non-Americans to consider the reactions of foreigners more than Americans must. Americans have two oceans and the world’s strongest military to keep a lot foreign troubles far away, other people do not. But this isn’t evidence that Americans care less about foreigners than those from other countries do. It sounds like you’re talking about a political blindness instead of an ethical blindness. Besides, there is equally good reason to think America’s hegemonic status makes Americans more worried about foreign goings-on since American lives and American business concerns are more often at stake.
Not “real” is the best description I have. You could say having the same sort of attitude towards other nations you might have towards Oz, Middle Earth or the Empire from Star Wars even though you intellectually know that they really exist, but that only comes close to what I mean. I must stress that not all Americans have this attitude, but some seem to do, and thats enough to influence the discourse.
I was thinking more of e. g. first contact situations in SF stories and things like that, not necessarily normal international politics, but I think it extends to all fields: Domestic politics (the amount and the kind of consideration the fact that a policy seems to work well somewhere else gets), pop culture, sports, science, language learning, wherever one might consider other nations Americans have more leeway not to do so. This doesn’t by necessity have to extend to ethical considerations, but when cousin_it observes that it appears to it seems inappropriate to me to “correct” that out.
Exactly zero evidence has been presented that Americans have this ill-defined attitude at a higher rate that non-Americans.
No reason given to think this is the case on balance.
The obvious and straight forward interpretation of cousin it’s comment was that he was referring to American nationalism. A real and quite common phenomenon in which Americans don’t give a lick about people who don’t live their country (in civilized places this is referred to as racism). I’ve met plenty of people with this view. It is a disgusting and immoral attitude. That said, it is a near ubiquitous attitude. Humans have been killing humans from other groups and not giving a shit for as long as there have been humans. We’re good at it. Really good. We do it like it’s our job. In no way is this unique to residents or citizens of the United States of America. If cousin_it meant something else he can clarify. He’s been commenting elsewhere throughout this conversation anyway.
(Not my downvote, btw)
Yes! Thank you! Finally, a human user says what I’ve been trying to say all along! (See for example here.)
On my first visit to Earth (or perhaps the first visit of one of my copies before a reconciliation), my reaction was (translated from the language of my logs):
“The Alpha species [i.e. humans] inflicts disutility on its members based on relative skin redness. I’m silver. Exit!”
While all what you say about nationalism is true It’s not obvious to me that it explains what cousin_it was talking about, at least not to its full extent. Degradation of other people through nationalism usually evokes hate (“those damned X!”), while the linked comment seemed too cheerful for that, it’s not like it encouraged to “help show it to those stinkin’ Arabs” or anything like that. As if the fact that someone might be hurt simply didn’t occur to them. There has been plenty of that in other historical cases of nationalism, but I think usually only in similarly asymmetrical situations. Nationalism in symmetrical situations seems to be of the plain hate kind.
Nationalism almost always displays as willful ignorance or apathy about the condition of those outside the nation. It’s nation-centrism, in other words. Hatred is an extreme case (thus the moniker “ultra-nationalism”).
This just isn’t true. At all. I’m not even sure where you would get it. There are nationalists all around the world who do not express hate toward other nations, even in cases of power symmetries.
More importantly: Why are we arguing about this? Cousin_it isn’t some old philosopher or public intellectual who we can’t reach for clarification. If he wants to correct my understanding of his comment let him do it.
Sorry for taking so much time to reply. FAWS is right, I’m not saying Americans hate foreigners. It’s more like a blindness or deafness. See my link above to the “amazing and unique experience” guy. The ethical angle of the situation simply doesn’t occur to him, it’s as if Iraqis were videogame characters. America’s fighting an aggressive war and killed umpteen thousand people?… uh, okay man, I got a career to advance and I wanna go someplace exotic, like expand my horizons and shit. I’ve never heard anything like that from Russians or anyone else except Americans, though I’d be the first to agree that we Russians are quite nationalistic.
The original disagreement wasn’t about the term nationalism (and I never claimed that nationalism didn’t explain it, only that what you said about nationalism up to that point didn’t), so you seem to be arguing my point here: For the reasons I described it’s easier for Americans to be “ignorant about the condition of those outside the nation”.
You can’t keep hurting someone and not even notice you do in a symmetrical conflict because they will hurt you back, and then you will want revenge in turn.
You seem to be of the opinion that you can’t even coherently/rationally (?) think a certain thing and I disagree. That disagreement is independent of the question whether anyone had actually been thinking that.
EDIT: Nation-centrism is close to what I meant with not feeling that other nations are “real”.
“willful” ignorance… Do we really need to spend time distinguishing nationalism from the fact that the US gets the NBA?
So what you want to claim is that asymmetrical conflict is more likely than symetrical conflict to lead to people in one country being ignorant of the animosity against them in the other country. This is plausible though several counterexamples come to mind and I’m not sure it applies since a large portion of American nationalists appear to conceive of the conflict as a symmetrical one (this has been a minor issue in American politics, of course). I’m not sure I see how this issue relates to nationalism exactly and what it’s relevance is. But as you can see below I’m not sure I understand what you’re claiming at this point.
WHAA? This is incredibly vague and confusing. I honestly have no idea what you’re talking about.
And the fact that you neither need to make any significant sacrifices nor engage in double-think doesn’t make willful ignorance easier?
Not really. The term nationalism is unhelpful. There seem to be at least two kinds, the we’re-great-don’t-care-about-anyone-else nation-centric one, and unite-against-the-enemy-us-or-them kind. My point is that being a hegemonic power facilitates the nation-centric kind. The sub-point that a hot symmetric conflict turns nationalism into the second kind pretty much by necessity even if it started out as the first kind. An asymmetric conflict of course allows either kind in the stronger party, presumably that’s what your counter-examples show.
Presumably you detected a feature that made the post knowably correctable. If that feature wasn’t an incoherent or irrational (in light of further evidence you have available) opinion, what was it?
That sounds like nationalism rather than racism to me. The country you live in has only a loose correlation with the colour of your skin. If people favoured countries which had a strong majority of people of a particular ethnicity that might be evidence for racism.
I was speaking loosely in the parenthetical. Nationalism has a strong tendency to manifest as racism and racism has a similar tendency to manifest as nationalism. They’re highly correlated but yes, conceptually distinct.
Because I thought it would be obvious enough. Americans are less likely to learn foreign languages, most Americans don’t even have a passport, it’s easier to write a science paper without referencing any non-American research (not that I think this done at a significant rate, but the equivalent would be unthinkable elsewhere), foreign movies are generally either ignored or remade (and set in the USA if possible), foreign trade is a smaller percentage of GDP than just about any other developed nation, it’s possible to “buy American” for a greater range of products than the equivalent anywhere else, America has the top leagues for the sports it cares about (it’s not just that America cares for different sports than the rest of the world, for almost all countries the top level of the sport that country cares most about is at least in part played elsewhere so a soccer fan in e. g. Romania has to pay attention to the English Premier League, the Spanish Premiera Divison etc. [and even the English and Spanish fans have incentive to pay attention to each others league because they are at roughly equal level and the top teams regularly play each other]. If America cared about soccer the top league would be there so Americans still wouldn’t have any reason to pay attention to foreign sports).
I think most of those things could be expected regardless of whether America has any such putative hegemonic status. Most Americans don’t have passports because they can’t afford to travel to another continent, and the number is rising now that passports are required to visit other countries in North America. Getting a passport in the US is a fairly annoying, expensive process, so I’m not surprised most people haven’t bothered. Ditto with the foreign languages—most Americans don’t meet or talk to people who don’t speak American.
I haven’t been able to find a source online—do most Chinese people speak foreign languages and have passports? Are they required?
Getting a passport is a bother everywhere, the point is that Americans don’t really need a passport because their country is huge, rich and powerful and they can take a vacation in whatever climate they like without ever leaving their borders. People in other developed nations would have to make much greater sacrifices to never travel abroad.
That’s exactly my point! They can do that without missing all that much, unlike most of the planet.
IIRC compulsory foreign language instruction (mostly in English) starts in third grade, and many educated Chinese learn a third/fourth language later. For many Chinese Mandarin is effectively a L2 language so they know their native dialect, Mandarin and some English. The state of English learning is mostly horrible and only a minority can communicate effectively, but I’d think that Chinese on average speak better English than non-native-speaker Americans speak Spanish and the difficulty is much greater.
I’m not all that clear about the passport situation/foreign travel and China is a bad example anyway because it is itself an enormous country and very “nation-centric”, but a huge number of Chinese study abroad, while there is no comparable reason for Americans to do so because they already have many of the most prestigious universities.
Again, why the down-vote? Is there any factual error or is giving evidence when asked not welcome here?
Why was this voted down? Was there anything in this post that isn’t either objectively true (Americans have more leeway to ignore other nations) or clearly marked as speculation (“seem to”)? Is it inherently irrational to consider the hypothesis that cousin_it’s observation was meant exactly as stated, and then to speculate about what might be behind this observation?
“War is bad, the military industrial complex is evil,” sounds good, and it hits all the right emotional buttons (care for humanity, etc.), but it is not necessarily true when all of the costs and benefits are taken into account. A defensive military allows intellectual, cultural, economic, and artistic endeavors to flourish without fear of attack. Destruction of infrastructure can open the way for rebuilding into a far better environment, and massive war spending can push the boundaries of technology. Reshaping political landscapes can cause huge culture shifts through decades which may result in much more open, and better, societies.
Suffering is terrible; death is abhorrent; and the benefits are uncertain enough, they should not be used as arguments to start an otherwise preventable war. But I do not see how we can appropriately judge the complex results of “war in general” on the timeline of decades or centuries.
What I can certainly agree with is that contributing to the military is bad on the margins, since it’s already getting more than its share of resources thanks to others of a more bloodthirsty bent.
At this point I laughed with a kind of sad laugh. Everyone who thinks America will use military robots for self-defense, raise your hands! On the other hand, you’ve made a wonderful argument that a strong offensive US military stifles cultural/economic/artistic endeavours worldwide due to fear of attack, though I’m sure you didn’t mean to.
They will use them for defense as well as for offense. I’ve already seen several articles about American cities ready to purchase military drones for law enforcement purposes, and I would be very surprised if they were not also added to strategic military bases within America to defend against potential attackers. At the very least, when countries are making strategy decisions that may involve the military, the mere existence of drones will serve as a deterrent.
My point was to state the necessity of defense. If there are strong, warlike countries with military drones, such as the United States, then other countries had better start developing countermeasures to protect themselves. That, or ally themselves with the strong country in the hopes of falling under its protection rather than its ire. As such, staying ahead of the other countries is a valid strategy.
And I would certainly agree that US aggressiveness is stifling those very things in Iraq, Afghanistan, Iran, etc. The word ‘fear’ was poorly chosen. I was thinking more of what happened to Tibet and all those pacifists when they failed to muster an appropriate military defense: actual invasion and displacement or destruction.
Oddly I don’t seem to have a reference handy, but several US cities already use robots in law enforcement. iRobot and Foster-Miller really took off after the well-publicized success of their robots at the WTC site.
How much harm do you contribute by working to enable military robots?
How much harm do you contribute by paying taxes to the US government, part of which are used to fund military robots?
How much harm do you contribute by existing, living in the US, and absorbing a huge amount of electricity and other natural resources?
Well, that was voted down pretty rapidly :)
However, I was being honest with my questions. I’d like to know what sort of utilon adjustments people assign to these different situations, even if it’s just a general weighting like ‘high’ or ‘low’.
My decision to not work for the military industrial complex is all about fuzzies, not utilons.
It can be useful to separate ‘fuzzies’ from ‘practical benefit’ but they can both be considered sources of utilons.
As I see it, it’s less about how much harm those specific things do and more about how viable the alternatives are. I expect that all governments make tax avoidance/evasion difficult, and I suspect that paying taxes to any government will support a military. The lifestyle changes involved in actually living sustainably (as opposed to being ‘slightly better than the US average’ or applying greenwash) seem pretty significant, and possibly unattainable for most of us. (I could be wrong on the latter in a general sense; I haven’t looked into it, since I’m already relatively sure that it’s beyond what I, personally, could manage.) Given that Warrigal was asking about the career move, though, I expect that he does have other viable options that could be pursued without completely turning his life upside down, and that’s a significant difference between this decision and the other two.
Costa Rica’s constitution forbids a military, and they seem to mean it, though one can quibble about whether their police count.
http://en.wikipedia.org/wiki/Military_of_Costa_Rica
How viable they are, given that you want to live in relative comfort and ease. But if a true valuation is made, then perhaps that comfort should not be taken as given, considering the costs.
I have not assigned numbers—it is not a simple question.
I live in Russia and have refused numerous invitations to migrate to the US.
There are various arguments that building military robots is bad, but I don’t think you’ve touched on any good ones. When you look at how unreliable human soldiers are on the field, creating military robots just seems like an obvious way to make things better for everyone involved. Fewer American casualties because we’re using robots, and fewer civilian casualties because the robots are better at not shooting at civilians.
Also, FWIW, most military robots currently aren’t the sort that shoot people—they do things like look around corners, draw fire, perform aerial surveillance, and detect/defuse bombs.
This is ironic. I wrote:
Then you wrote:
This happens to pixel-perfectly demonstrate my point about ethical blindness. Reread my quote again, then your quote, then mine, then yours again. Notice anything wrong? Anything missing?
You see, you omitted one pretty important group: everyone America calls “enemy combatants”. If you think all of them are bad people and deserve to die, then you obviously don’t get it. Repeat after me: America Starts Aggressive Wars. Then say it again because it’s true and truth won’t suffer from repetition. Say it as many times as you need to make it sink in, then come back and we will resume this discussion.
America will be killing those people with or without robots. We already have ways of wiping all of the enemy combatants off the map if we want to (for example nukes). Military technology is primarily about finding ways to 1) kill fewer of our own soldiers and 2) kill fewer people who aren’t enemy combatants.
Not necessarily. All else equal, the less it costs to wage a war (in money, American lives, and good will), the more likely leaders are to actually start one.
Ignoring the question of whether that’s desirable or not (politics is the mindkiller), reducing the cost of killing those people will lead to more of them being killed in marginal situations where such considerations matter.
Yes, that’s one of the good arguments against robot soldiers I mentioned above. We’re more likely not to care about the fate of our robot soldiers, and so would be less hesitant to send them into battle. Though it’s still an open question whether that effect would trump any increase in monetary cost per soldier, and whether the other benefits outweigh such concerns.
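To make the marginal-cost point concrete, here is a toy illustration of my own (not anything from the thread, and with entirely made-up numbers): if a leader starts a war whenever perceived benefit exceeds perceived cost, then lowering the cost flips exactly the borderline cases.

```python
# Toy model with hypothetical numbers: wars are started when perceived benefit
# exceeds perceived cost, so cheaper war-fighting flips marginal cases.
candidate_conflicts = [30, 55, 70, 90, 120]  # perceived benefit of each, arbitrary units

def wars_started(cost):
    """Count how many candidate conflicts a leader would start at a given perceived cost."""
    return sum(benefit > cost for benefit in candidate_conflicts)

print(wars_started(100))  # costly war-fighting: 1 conflict crosses the threshold
print(wars_started(60))   # cheaper (e.g. robotic) war-fighting: 3 conflicts cross it
```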
Human soldiers perform horribly in terms of following the rules of war, and beyond that they sometimes do absolutely horrible things.
Also, this is definitely not the place to debate this, and you have to know a lot of people won’t agree with you, so stop with the flamebait.
You don’t even have to go as far as “America Starts Aggressive Wars”—“Under the right conditions, America is capable of starting aggressive wars, and is more likely to do so if the cost of doing so is lowered.”
Look, I get the “Politics is the Mind Killer” mantra, and I agree that it would be fruitless to start a debate about something like abortion here—it comes down to definitions and conventions about what is moral.
But when something is actually, demonstrably, true, refusing to look at and examine the truth because it is painful to do so is not compelling. It doesn’t even trigger most of the reasons in “politics is the mindkiller”—both major U.S. political parties are just fine with most of the examples. The only two teams that can credibly be put in opposition here are “U.S.A.” and “Everyone else”.
It is worth noting that to complete the argument someone needs to show that America starting aggressive wars is bad. The people starting such wars, it turns out, have their reasons.
[half-ironic] Yep. Some countries are just in desperate need of a good ol’ fashioned ass-kicking. [/half-ironic]
Why flamebait? I stated a very well-known fact.
http://en.wikipedia.org/wiki/Bay_of_Pigs_Invasion
http://en.wikipedia.org/wiki/Operation_Power_Pack
http://en.wikipedia.org/wiki/Operation_Urgent_Fury
http://en.wikipedia.org/wiki/Operation_Just_Cause
More here: http://en.wikipedia.org/wiki/CIA_sponsored_regime_change
ETA: to tell the truth, until I dug up that last Wikipedia page just now for purposes of argument, I still had no clear idea how often this has happened. And give these people autonomous killer robots? In the name of developing Friendly Intelligence?
1) Politics is the mind killer, 2) Agree denotationally but not connotationally
Bay of Pigs? Really? How about nailing us on the Philippines while you’re at it. :-)
It isn’t like there aren’t recent examples to choose from.
That’s why. Folks will disagree that’s something that the US does, and pointing to things the US might have done decades ago won’t convince them. There’s no way to even debate this point without going down a potentially mind-killing rabbit hole, and I find it hard to believe you weren’t aware of this when you posted it.
In case you weren’t aware of it: I live in the US, and I’ve talked to a number of ordinary folks and a number of scholarly folks about it, and I don’t tend to encounter people who would grant that the US starts aggressive wars. You should be able to see why someone who thinks that would be angry and vocal about the accusation.
Ooh… I thought we were having a factual disagreement. I apologize. Maybe this won’t work as flamebait here :-)
Creating military robots can be friendly, if:
Lbh fryy gur ebobgf gb nyy fvqrf, ercynpvat uhzna nezvrf, naq unir gurz evttrq gb abg npghnyyl svtug rnpu bgure, ohg vafgrnq gnxr njnl gur rssrpgvir cbjre bs gur tbireazragf gung jnagrq nyy gur jnef.
(Rot13)
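For anyone who would rather not decode the spoilered comment above by hand, a minimal sketch using Python’s standard-library rot13 codec (this snippet is my addition, not part of the original comment):

```python
import codecs

spoiler = "..."  # paste the rot13 text from the comment above here
print(codecs.decode(spoiler, "rot_13"))  # rot13 is its own inverse
```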
Unfortunately, this isn’t a realistic option if you’re an employee at a big military contractor, which is the most likely scenario...
Well, yeah, there is no way someone at standard human level would pull off what happened in that story.
The difference between specialized FAI and general FAI is like the difference between adaptation executors and fitness maximizers. It’s a big difference.
Is specialized FAI even a meaningful term? ISTM that to implement actual friendliness even in a specialized application an AI needs capabilities that imply AGI.
It’s a nonstandard term that seemed appropriate to the discussion. By specialized FAI, I mean an AI that reliably does the thing it was made to do in a specific context.
Isn’t that the same as specialized AI? I don’t think anybody deliberately makes specialized AIs that don’t work.
Sounds like a good idea, but here are my reservations/warnings:
1) For the kind of work you describe, you would probably need a high-level security clearance and continued scrutiny on your life (to make sure you don’t share it with the wrong people), and you probably wouldn’t be able to publicly discuss your work. (i.e., where SIAI can hear it.)
2) What are your chances you’ll actually get to work on the aspect of the problem that relates to Friendliness?
The scrutiny isn’t so bad. They’re mainly looking for illegality or potential for corruption. And even if you’ve committed illegal acts, so long as you own up to them and they weren’t in the recent past (5 to 7 years), it’s generally OK. Felonies are a different matter, of course.
A secret clearance involves an interview, fingerprinting, interviews of family, friends, and neighbors, a credit check, and likely drug testing. Top secret clearances and above lead to polygraphs and heavy grilling, with monitoring for new developments. They’re renewed every few years, going through the process again.
Most of the military drone programs would be given to one large contractor like Lockheed Martin or NGIT, with lots of smaller subcontractors. A security clearance at secret level or above takes up to 9 months, costs the company over $10,000, and adds that much or more to that person’s annual salary potential, so it’s not something they hand out lightly.
Most contracting agencies put a small, already-cleared team on the activities that require it, and farm out most of the work (documentation, mundane code, etc.) to people without clearances. If they need more people with clearances, they tend to get temporary waivers for the duration of the work (90 days or less, for example). Most only see a small part of the whole, and you don’t choose your projects; your company does.
These are not good environments to learn complex, high-level things like Friendliness.
It wasn’t so much the background scrutiny I was worried about as this:
“Alright, it’s been fun doing this research on human-level intelligent robots. Oh, hey, I’m going to go to an AI conference in Shanghai...”
“Hahahahahaha! Good one! Um … were you being serious?”
Yeah, that could get you in big trouble.
Yep. And so could the appearance on the internet of an e-book about “How to build a human-level armed android, by Warrigal”, when Warrigal has worked at such a job.
And if you go to a potentially hostile country without telling them … well, I guess you’ll get the option of a PMITA federal prison, or solitary.
No. FAI is about figuring out how to implement precise preference, not an approximation of it appropriate for non-magical environments. Requires completely different tools.
It seems that to work on FAI, one has to become a mathematician and theoretical computer scientist (whatever the actual career).
What do you mean by “non-magical environments”?
I gave a link! A non-magical environment gives limited expressive power, so there are few surprising situations that the given heuristics don’t capture. With enough testing and debugging, you may get your weakly intelligent robot to behave. Where more possibilities are open, you have to get the preference exactly right, or the decisions will be obviously wrong (see The Hidden Complexity of Wishes).
Your terminology was unclear but this definition is not—I would tend to call it an “organic” environment.
I have very little in the way of morality, but I personally draw the line at supporting the military industrial complex. I don’t think helping the military make robots that make kill decisions themselves has much to do with provable mathematical Friendliness.
It seems you are morally obliged to at least investigate possible mechanisms for tax evasion. But then, morality doesn’t have all that much to do with consequences.
One practical way for me to evade taxes is to start a startup and sell it, which means my income will be taxed at the much lower capital gains rate.
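As a rough back-of-the-envelope sketch of why that matters (illustrative 2009-era US federal rates, ignoring brackets, state taxes, and everything else; not tax advice):

```python
# Illustrative only: approximate 2009-era US federal rates, flat for simplicity.
ORDINARY_RATE = 0.35       # top marginal rate on ordinary income
CAPITAL_GAINS_RATE = 0.15  # long-term capital gains rate

proceeds = 1_000_000  # hypothetical sale price of the startup

as_ordinary_income = proceeds * ORDINARY_RATE    # 350,000
as_capital_gains = proceeds * CAPITAL_GAINS_RATE  # 150,000
print(as_ordinary_income - as_capital_gains)      # 200,000 less tax on the same dollars
```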
Also, I draw a distinction between something I am comfortable doing and the likely future progress of society as a whole. Killer robots aren’t going away anytime soon, and except for the extra wars they will allow us to have, killer robots result in fewer US deaths and more effective military tactics than troops on the ground. I expect that US killer robots will be making kill decisions, or at least very strong kill suggestions that are followed 99% of the time, within 10 years. There’s just too much data coming in too fast for a single human operator to process.
If the African totalitarians are still around in 25 years, the possibility of being conquered by an army of killer robots may make them more amenable to internationally monitored elections.
So good and bad things will come about as a result of the killer robot armies of the future. It’s really the military industrial complex as a whole I object to; robots making kill decisions is one of the less objectionable things within the military industrial complex.
Uh, that’s a pretty dumb thing to say. For one, starting a startup and selling it has rather broader consequences than a typical tax avoidance strategy. That’s like suggesting moving to a third-world country to cut down on your daily living expenses: your food and accommodation costs may indeed decrease, but it significantly changes your life in all kinds of other ways as well. For another, this would not be tax evasion but tax avoidance, which has the rather significant difference of being entirely legal.
I’m fully aware of the distinction; I was playing with the ambiguous distinction between evasion and avoidance (as you say, the distinction being that avoidance is legal) by using the language of the person I replied to. I was trying to imply that there is no profound difference between avoidance and evasion, just the definitions given by the rule of law.
I assumed wedrifid knew the difference and was suggesting you were morally bound to evade rather than merely avoid taxes if you draw the line at supporting the military industrial complex. I don’t necessarily agree with that but I took that to be his point.
I would have thought that maximizing tax avoidance is something that any aspiring rationalist ought to be doing as a matter of course.
The fact that you can go to jail for tax evasion seems like a pretty profound difference from tax avoidance to me. The whole tax structure is ‘just’ the definitions given by the rule of law.
I don’t particularly want to avoid taxes, either—I like living in a country with a government.
I like living in a country with a government compared to Somalian anarchism, but not compared to libertarian utopia. This is getting close to politics.
As good a reason as any to drop the subject of tax avoidance.
Yes, Less Wrong could use some sort of Godwin’s law analog, where a thread is declared dead or at least discouraged once it hits politics.
I think the general consensus is that we tread carefully when straying into political territory and tend to avoid explicitly political (certainly party political) discussion but that we don’t entirely avoid discussion that has a political dimension. Taken to an extreme that would seem to preclude most topics of any interest or significance. Generally the standard of discourse is fairly high here and political slanging matches are avoided.
And I still don’t consider it a political point that you basically fail at instrumental rationality if you overpay on your taxes.
I don’t see the contradiction. The government creates the tax code with at least the stated intention of encouraging or subsidizing certain behaviours over others. That only works if people respond rationally to the incentives.
From the individual rationalist’s point of view one should aim to optimize one’s resources. In the context of taxes that generally means arranging your financial affairs to minimize the taxes paid without breaking the law. You can then choose how to best meet your own goals by allocating the money you save as you see fit.
It is only rational not to avoid taxes if you either believe the effort required to avoid them is not worth the money saved or believe that the optimal use of the money is to give it to the government. In the latter case, it seems unlikely that the optimal amount to give to the government just happens to be the very amount they take from you, so you should probably be voluntarily donating a larger portion of your income to the government. If you live in the US you should go here.
Since we were talking about choice of career among other things, it’s worth stating that your actual incentive here more closely resembles “maximizing your after-tax income” than “minimizing your taxes paid”.
True, I was focusing slightly more narrowly on the idea of minimizing your tax burden at your current income level without making major changes in your career, country of residence, etc. but on a longer timescale or in the context of broader life goals you are aiming to maximize your after-tax income rather than minimize the taxes you pay.
I don’t think I’m morally bound to evade taxes for the same reason I’m not morally bound to stop the world’s massive amounts of animal suffering. My utility function breaks if I take my morality too seriously. As you say, I am somewhat bound morally to try and evade taxes or even actively stage insurrection against my government. Both of those seem like very bad ideas, as the state will just crush me.
Not working for the government in lieu of trying to bring down the government is similar to my decision to eat less meat rather than trying to make the whole world eat less meat. Yes, I am aware that these are not anywhere close to perfectly analogous decisions.
I’d say yes, go for it. The value would be in gaining experience in designing AI systems that have to work in the real world—a very different proposition from systems that only have to work in the laboratory or in the imagination.
Cognitive neuroscience and cognitive psychology are far more relevant. A Friendly AI is a moral agent; it’s more like a judge than a cruise missile. A killer robot must inflict harm appropriately but it does not need to know what “harm” is; that’s for politicians, generals, and other strategists.
We have to extract the part of the human cognitive algorithm which, on reflection, encodes the essence of rational and moral judgment and action. That’s the sort of achievement which FAI will require.
The problems involved in creating ethical military robots are vastly different from those involved in general AI. Ron Arkin’s Governing Lethal Behavior in Autonomous Robots does a good job of describing how one should think when building such a thing. Basically, there are rules for war, and the trick is to just implement those in the robot, and there’s very little judgement left over. To hear him explain it, it doesn’t even sound like a very hard problem.
Then I’m not sure he understands the problem. How does the robot tell the difference between an enemy soldier and a noncombatant? When they’re surrendering? When they’re dead/severely wounded?
The rules of war themselves are fairly algorithmic, but applying them is a different story.
Well there’s a bit of bracketing at work here. Distinguishing between an enemy soldier and a noncombatant isn’t an ethical problem. He does note that determining when a soldier is surrendering is difficult, and points out the places where there really is an ethical difficulty (for example, someone who surrenders and then seems to be aggressive).
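As a very rough sketch of the shape of the “just implement the rules of war” approach (my own simplification for this thread, not Arkin’s actual architecture): the rule layer really is a straightforward gate, and all of the hard problems the replies above point at live in producing its inputs, such as combatant classification, surrender detection, and collateral estimates.

```python
from dataclasses import dataclass

@dataclass
class TargetAssessment:
    # These fields are exactly where the hard part lives: some perception
    # system has to produce them, which is the objection raised above.
    is_combatant: bool
    is_surrendering: bool
    is_incapacitated: bool
    expected_collateral: float  # estimated harm to noncombatants
    military_value: float       # estimated military advantage

def engagement_permitted(t: TargetAssessment) -> bool:
    """Toy rule-of-war gate: distinction, hors de combat, crude proportionality."""
    if not t.is_combatant:
        return False                                   # distinction
    if t.is_surrendering or t.is_incapacitated:
        return False                                   # hors de combat
    return t.expected_collateral <= t.military_value   # proportionality

# The gate itself is trivial; getting the TargetAssessment right is not.
print(engagement_permitted(TargetAssessment(True, False, False, 0.1, 1.0)))  # True
print(engagement_permitted(TargetAssessment(True, True, False, 0.0, 1.0)))   # False
```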
This is a good question, I would appreciate more discussion of it on LW. I am wondering about similar issues: my research involves computer vision, the most obvious applications of which are for surveillance and security. One does not need to be a science fiction author or devotee to imagine powerful computer vision tools or military robots being used for evil.
Whether something can be used for evil or not is the wrong question. It’s better to ask “How much does computer vision decrease the cost of evil?” Many of the bad things that could be done with CV can be done with a camera, a fast network connection, and an airman in Nevada, just as many of the good medical applications can be done by a patient postdoc or technician.
Better still is to ask, “What are the benefits and harms of doing this rather than something else, including cascading consequences on to the indefinite future?” Which, of course, is murderously hard to answer in cases this far removed from direct consequences.
Which is what I meant when I said computer vision research was not distinguished. Although upon consideration I would weaken the claim to “not strongly distinguished”, which might still be enough to justify doing something else.
People can use anything for evil if they want—I don’t see how computer vision is distinguished on that metric.
You just succumbed to the fallacy of gray. Computer vision is more easily used for evil than e.g. water purification technology.
Fair enough.