I should note that most of the organizations we are talking about (Alcor, ACS, CI) are non-profits.
I didn’t mind the old one, but I do like the “sticky brains” label that we can use for this concept in the future.
Agreed—the trick is that being wrong “only once” is deceptive. I may be wrong more than once on a one-in-forty-million chance. But I may also be wrong zero times in 100 million tries, on a problem as frequent and well-understood as the lottery, and I’m hesitant to say that any reading problems I may have would bias the test toward more lucrative mistakes.
“an unprecedented and unlikely phenomenon”
Possible precedents: the Library of Alexandria and the Dark Ages.
Definitely. Eliezer’s success gives us perhaps an upper bound on the minimum intelligence necessary to pull that off.
Who said he was? If Eliezer can convince somebody to let him out of the box—for a financial loss no less—then certainly a transhuman AI can, right?
Sure, though the mechanism I was referring to is “it can convince its handler(s) to let it out of the box through some transhuman method(s).”
This is essentially the AI box experiment. Check out the link to see how even an AI that can only communicate with its handler(s) might be lethal without guaranteed Friendliness.
Indeed, I agree—I meant that it doesn’t matter what conclusions you hold as much as how you interact with people as you search for them.
I agree with Kevin that belief is insufficient for exclusion/rejection. Best I can tell, it’s not so much what you believe that matters here as what you say and do: if you sincerely seek to improve yourself and make this clear without hostility, you will be accepted no matter the gap (as you have found with this post and previous comments).
The difference between the beliefs Kevin cited lies in the effect they may have on the perspective from which you can contribute ideas. Jefferson’s deism had essentially no effect on his political and moral philosophizing (at least, his work could easily have been produced by an atheist). Pat Robertson’s religiosity has a great deal of effect on what he says and does, and that would cause a problem.
The fact that you wrote this post suggests you are in the former category, and I for one am glad you’re here.
To be clear, I wasn’t arguing against applying the outside view—just against the belief that the outside view gives AGI a prior/outside view expected chance of success of (effectively) zero. The outside view should incorporate the fact that some material number of technologies not originally anticipated or even conceived do indeed materialize: we expected flying cars, but we got the internet. Even a 5% chance of Singularity seems more in line with the outside view than the 0% claimed in the reference class article, no?
I agree with your comment on the previous post, incidentally, that the probability of the Singularity as conceived by any individual or even LW in general is low; the possible types of Singularity are so great that it would be rather shocking if we could get it right from our current perspective. Again, I was responding only to the assertion that the outside view shows no successes for the class of breakthroughs containing AGI/cryo/Singularity.
I should note too that the entirety of the quotation you ascribe to me is originally from Eliezer, as the omitted beginning of the quoted sentence indicates.
There is a difference between giving something negative utility and giving it decreasing marginal utility. It’s sufficient to give the AI exponents strictly between zero and one for all terms in a positive polynomial utility function, for instance. That would be effectively “inputting” the marginal utility of resources, given any current state of the world.
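A minimal numeric sketch of that idea (the specific coefficients, exponents, and resource levels below are illustrative assumptions, not anything from the original discussion): with every exponent strictly between zero and one, marginal utility stays positive but shrinks as the resource stock grows.

```python
# Illustrative only: a positive polynomial utility with exponents in (0, 1).
# Each term's marginal utility is positive but strictly decreasing.

def utility(resources, coefficients=(1.0, 2.0), exponents=(0.5, 0.8)):
    """U(x) = sum_i a_i * x_i ** p_i with a_i > 0 and 0 < p_i < 1."""
    return sum(a * x ** p for a, x, p in zip(coefficients, resources, exponents))

# Holding the second resource fixed, the gain from one more unit of the
# first resource keeps falling as the stock grows:
for x in (1, 10, 100):
    delta = utility((x + 1, 5)) - utility((x, 5))
    print(f"marginal utility near x={x}: {delta:.4f}")
```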
I propose a further hypothesis: high-status people have internalized Laws 4, 5, and 46 of the 48 Laws of Power, but especially Law 1: Never Outshine the Master.
After years of practice in switching between seeming competent relative to underlings and less so relative to superiors, they develop the ability to segregate audiences as you described.
Crime is down during the current recession. It’s possible that the shock simply hasn’t been strong enough, but it may be evidence nonetheless.
I think Hanson’s hypothesis was more about true catastrophes, though—if some catastrophe devastated civilization and we were thrown back into widespread starvation, people wouldn’t worry about morality.
I was connecting it to and agreeing with Zack M Davis’ thought about utilitarianism. Even with Roko’s utility function, if you have to choose between two lotteries over outcomes, you are still minimizing the expected number of rights violations. If you make your utility function lexicographic in rights, then once you’ve done the best you can with rights, you’re still a utilitarian in the usual sense within the class of choices that minimizes rights violations.
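A rough sketch of what “lexicographic in rights” amounts to operationally (the lotteries and numbers below are invented for illustration): first minimize expected rights violations, then break ties by ordinary expected utility.

```python
# Hypothetical lotteries: each outcome is (probability, rights_violations, utility).
lotteries = {
    "A": [(0.5, 1, 10.0), (0.5, 0, 4.0)],
    "B": [(0.9, 0, 3.0), (0.1, 5, 8.0)],
}

def expected(lottery, index):
    # index 0 -> expected rights violations, index 1 -> expected utility
    return sum(p * outcome[index] for p, *outcome in lottery)

# Lexicographic choice: minimize expected rights violations first,
# then maximize expected utility among the remaining candidates.
best = min(lotteries, key=lambda name: (expected(lotteries[name], 0),
                                        -expected(lotteries[name], 1)))
print(best)  # both lotteries tie on expected violations, so ordinary utility decides
```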
It does occur to me that I wasn’t objecting to the hypothetical existence of said function, only that rights aren’t especially useful if we give up on caring about them in any world where we cannot prevent literally all violations.
And if you cannot act such that 0 rights are violated? Your function would seem to suggest that you are indifferent between killing a dictator and committing the genocide he would have caused, since the number of rights violations is (arguably, of course) in both cases positive.
Perhaps I was simply unclear. Both my immediately prior comment and its grandparent were arguing only that there should be a nonzero expectation of a technological Singularity, even from a reference class standpoint.
The reference class of predictions about the Singularity can, as I showed in the grandparent, include a wide variety of predictions about major changes in the human condition. The complement or negation of that reference class is a class of predictions that things will remain largely the same, technologically.
Often, when people appear to be making an obviously wrong argument in this forum, it’s a matter of communication rather than massive logic failure.
I’d heard it re: the smoking bans implemented in Minneapolis; I’m not surprised that Canada takes an especially paternalist position on the matter.
Logarithmic utility functions are already risk-averse by virtue of their concavity: a 50% chance of doubling or halving your wealth is a 25% gain in expected value, yet a log-utility agent is exactly indifferent to that gamble.
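A quick numeric check, assuming an arbitrary starting wealth of 100: expected wealth under the gamble rises by 25%, while expected log utility is unchanged, so the log-utility agent is indifferent to a bet with positive expected monetary value.

```python
from math import log

wealth = 100.0  # arbitrary starting wealth
outcomes = [(0.5, 2 * wealth), (0.5, wealth / 2)]  # 50% double, 50% halve

expected_wealth = sum(p * w for p, w in outcomes)
expected_log = sum(p * log(w) for p, w in outcomes)

print(expected_wealth)             # 125.0 -> a 25% gain in expected wealth
print(expected_log, log(wealth))   # equal -> the log-utility agent is indifferent
```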