There is a continuum on this scale. Is there a hard cutoff, or is there any scaling? And what about very similar forks of AIs?
I’ll go along with that.
So, how do you characterize ‘Merkelterrorists’ and ‘crimmigrants’? Terms of reasonable discourse?
Your certainty that I am lying and blindly partisan appears to be much stronger than justifiable given the evidence publicly available, and from my point of view where I at least know that I am not lying… well, it makes your oh-so-clever insinuation fall a touch flat. As for being blindly partisan, what gives you the impression that I would tolerate this from the other side?
At the very least, I think this chain has shown that LessWrong is not a left-side echo chamber as Thomas has claimed above.
Except that risk is not in fact exaggerated
If so, the original expression of that risk was presented in such a fashion as to make that claim as non-credible as possible, through explicitly emotionally inflaming wording.
It’s possible to talk about politics without explicitly invoking Boo lights like ‘crimmigrants’ and appeals to exaggerated risks like ‘may rob/rape/kill you anytime of day or night’. You can have a reasonable discussion of the problems of immigration, but this is not how you do it. Anyone who says this is A-OK argumentation and that calling it out is wrong is basically diametrically opposed to LessWrong’s core concepts.
Basically, you’re accusing me of outright lying when I say that argument is quite badly written, and of instead being blindly partisan. It was badly written, and I am not. I don’t even know WHAT to do about the problems arising from the rapid immigration from the Middle East into Europe. I certainly don’t deny they exist. What I DO know is that talking about it like that does not help us approach the truth of the matter.
Spreading this shitty argumentation in a place that had otherwise been quite clean, that’s what’s gotten under my skin.
This is utterly LUDICROUS.
Look at what happened. tukabel wrote a post of rambling, grammar-impaired, hysteria-mongering hyperbole: ‘invading millions of crimmigrants that may rob/rape/kill you anytime day or night’. This is utterly unquestionably NOT a rationally presented point on politics, and it does not belong on this forum, and it deserves to be downvoted into oblivion.
Stuart said he wished to be able to downvote it.
Then out of nowhere you come in and blame him personally for starting something he manifestly didn’t start. It’s a 100% false comment.
Upon being called out on this, you backtrack and say your earlier point didn’t actually matter (meaning it was bullshit to begin with), complaining that he’s *gasp* liberal.
But here it didn’t take being liberal to want to downvote. If I agreed 100% with tukabel, I would be freaking EMBARRASSED to have that argument presented on my side. It was a really bad comment!
The main difference I see with nuclear weapons is that if neither side pursues them, you end up in much the same place as if the race had been very close, except without having spent a lot to get there.
With AI, by contrast, the benefits would be huge, unless the failure is equally drastic.
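A toy illustration of that asymmetry, with entirely made-up numbers (COST, BENEFIT, and CATASTROPHE are hypothetical placeholders, not estimates):

```python
# Hypothetical payoffs for symmetric outcomes of a two-sided race.
# The numbers are illustrative only; they just mirror the claim above.

COST = 1.0  # development cost borne by each side

# Nuclear weapons: a very close race roughly restores the prior balance
# of power, so the symmetric outcomes differ only by the sunk cost.
nuclear = {
    "neither pursues": 0.0,
    "both pursue (close race)": 0.0 - COST,
}

# AI: success is not just a return to balance; the technology itself
# pays off enormously, unless the failure is equally drastic.
BENEFIT, CATASTROPHE = 100.0, -100.0
ai = {
    "neither pursues": 0.0,
    "both pursue, success": BENEFIT - COST,
    "both pursue, drastic failure": CATASTROPHE - COST,
}

for name, table in (("nuclear", nuclear), ("AI", ai)):
    for outcome, payoff in table.items():
        print(f"{name:8} {outcome:30} {payoff:+7.1f}")
```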
Seems to me like Daniel started it.
This seems to be more about human development than AI alignment. The non-parallels between these two situations all seem very pertinent.
What would a natural choice of 0 be on that log? I would nominate bare subsistence income, but then any person having less than that would completely wreck the whole thing, since the log blows up at the zero point and is undefined below it.
Maybe switch to inverse hyperbolic sine of income over bare subsistence income?
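A quick sketch of why asinh behaves well here (whichever reading of ‘over’ is intended, ratio or excess, the key point is the same):

$$\operatorname{asinh}(x) = \ln\left(x + \sqrt{x^2 + 1}\right)$$

For large x this is approximately ln(2x) = ln 2 + ln x, so it tracks the log where the log works; but asinh(0) = 0 and the function is defined (and odd) for negative arguments, so incomes at or below the zero point merely map to small or negative values instead of wrecking the whole thing.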
Quite: I have a 10-year-old car and haven’t had to do anything more drastic than change the battery; regular-maintenance kind of stuff.
This is about keeping the AI safe from being altered by bad actors before it becomes massively powerful. It is not an attempt at a Control Problem solution. It could still be useful.
A) the audit notion ties into having our feedback cycles nice and tight, which we all like here.
B) This would be a little more interesting if he linked to his advance predictions on the war so we could compare how he did. And of course if he had posted a bunch of other predictions so we could see how he did on those (to avoid cherry-picking). That would rule out rear-view-mirror effects.
Really? There seems to be a little overlap to me, but plenty of mismatch as well. Like, MM says Bayesians are on crack, as one of the main points of the article.
Agreed on that last point particularly. Especially since, if they want similar enough things, they could easily cooperate without trade.
Like if two AIs supported Alice in her role as Queen of Examplestan, they would probably figure that quibbling with each other over whether Bob the gardener should have one or two buttons undone (just on the basis of fashion, not due to larger consequences) is not a good use of their time.
Also, the utility functions can differ as much as you want on matters that aren’t going to come up. Like, Agents A and B disagree on how awful many bad things are. Both agree that they are all really quite bad and that all effort should be put forth to prevent them.
An American Rationalist subculture question, perhaps. Certainly NOT America as a whole.
You say all excuses are equally valid and then turn around and say they’re more or less valid. Do you mean that excuses people would normally think of making have a largely overlapping range of possible validities?
Well, if the laws of the universe were such that it were unlikely but not impossible for life to form, MWI would take care of the rest, yes.
BUT, if you combine MWI with something that sets the force laws and particle zoo of the later universe as an aspect of quantum state, then MWI helps a lot—instead of getting only one, it makes ALL† of those laws real.
† or, in the case of precise interference that completely forces certain sets of laws to have a perfectly zero component, nearly all. Or if half of them end up having a precisely zero component due to some symmetry, then the other half of these rule-sets… etc. Considering the high-dimensional messiness of these proto-universe theories, large swaths being nodal (having zero wavefunction) seems unlikely.
A) what cousin_it said.
B) consider, then, successively more and more severely mentally nonfunctioning humans. There is some level of incapability at which we stop caring (e.g. head crushed), and I would be somewhat surprised at a choice of values that puts a 100% abrupt turn-on at some threshold; and if one did, I expect some human could be found or made who would flicker across that boundary regularly.