I’m mainstream, you guys are fringe, do you understand? I am informing you that you are not only unconvincing, but look like complete clowns who don’t know big O from a letter of the alphabet. I know you want to do better than this. And I know some of the people here have technical knowledge.
Dmytry
A belief propagation graph
What I am certain of is that your provided argument does not support, or even strongly imply your stated thesis.
I know this. I am not making an argument here (or at least, I am trying not to). I’m stating my opinion, primarily on the presentation of the argument. If you want an argument, you can e.g. see what Hanson has to say about foom. This is deliberate. I am not some messiah hell-bent on rescuing you from some wrongness (that would be crazy).
value states of the world instead of states of their minds
Easier said than done. Valuing the state of the world is hard; you have to rely on senses.
Okay, then, you’re right: the manner of presentation of the AI risk issue on lesswrong somehow makes a software developer respond with incredibly bad and unsubstantiated objections.
Why is it that when a bunch of people get together, they don’t even try to evaluate the impression they make on one individual (except very abstractly)?
Precisely, thank you! I hate arguing such points. Just because you can say something in English does not make it a utility function in the mathematical sense. Furthermore, just because something sounds in English like a modification of a utility function does not mean that it is mathematically a modification of a utility function. Real-world intentionality seems to be a separate problem from making a system that can figure out how to solve problems (mathematically defined ones), and likely a very hard problem (in the sense of being very difficult to define mathematically).
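To be concrete about what “utility function in the mathematical sense” means here, a minimal sketch (the notation is my own illustration, not anything from this exchange): a utility function is a map from a formally specified outcome space to the reals, and an optimizer picks the maximizer,

\[ u : X \to \mathbb{R}, \qquad x^{*} = \operatorname{arg\,max}_{x \in X} u(x). \]

An English phrase like “value human welfare” only becomes such an object once someone has actually constructed $X$ and $u$; that construction is exactly the hard, unsolved part.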
With all of them? How so?
If even widely read bloggers like EY don’t qualify to affect your opinions, it sounds as though you’re ignoring almost everyone.
I think you discarded one of the conditionals. I read Bruce Schneier’s blog. Or Paul Graham’s. Furthermore, it is not about disagreement with the notion of AI risk. It’s about keeping the data non-cherry-picked, or at least less cherry-picked.
Thanks. Glad you like it. I did put some work into it. I also have a habit of maintaining epistemic hygiene by not generating a hypothesis first and then cherry-picking examples in support of it, but that gets a lot of flak outside scientific or engineering circles.
To someone that wants to personally exist for a long time, it becomes very relevant what part humans have in the future.
I think this is an awesome point I overlooked. That talk of the future of mankind, that assigning of moral value to future humans but zero to the AI itself… it does actually make a lot more sense in the context of self-preservation.
I didn’t go down any road for confirmation. I put your single testimony in a more realistic perspective. Not believing one person who seems to have a highly emotional agenda isn’t ‘cultish’, it’s just practical.
I think you grossly overestimate how much of an emotional agenda disagreement with counterfactual people can produce.
edit: botched the link.
Something that I forgot to mention, which tends to strike a particularly wrong chord: the assignment of zero moral value to the AI’s experiences. Future humans, who may share very few moral values with me, are given nonzero moral utility. AIs that start from human culture and use it as a starting point to develop something awesome and beautiful are given zero weight. That is very worrying. When your morality is narrow, others can’t trust you. What if you were to assume I am a philosophical zombie? What if I am not reflective enough for your taste? What if I am reflective in a very different way? (Someone has suggested this as a possibility.)
It’s not irrational, it’s just weak evidence.
Why is it necessarily weak? I found it very instrumentally useful to try to factor out the belief-propagation impact of people with nothing clearly impressive to show. There is a small risk that I miss some useful insights, but there is much less pollution by privileged hypotheses based on wrong priors. I am a computationally bounded agent. I can’t process everything.
This is another example of a method of thinking I dislike: thinking by very loaded analogies, and implicit framing in terms of a zero-sum problem. We are stuck on a mud ball with severe resource competition. We are very biased to see everything as a zero- or negative-sum game by default. One could easily imagine a scenario where we expand more slowly than the AI, so our demands always stay below its charity, which is set at a constant percentage. Someone else winning doesn’t imply you are losing.
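To make that toy scenario concrete (the symbols here are purely my own illustration): suppose human demand grows as $d_0 e^{gt}$ while the AI’s resources grow as $R_0 e^{Gt}$, and the AI commits a fixed fraction $\alpha$ of its resources. Then

\[ d_0 e^{gt} \le \alpha R_0 e^{Gt} \ \text{ for all } t \ge 0 \quad \text{whenever} \quad d_0 \le \alpha R_0 \ \text{ and } \ g \le G, \]

i.e. if the allotment covers our demand today and the AI grows at least as fast as we do, both sides keep gaining forever; nothing about the setup forces a zero-sum outcome.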
It’s just that I don’t believe you folks really are this greedy for the sake of mankind, or assume such linear utility functions. If we could just provide food, shelter, and reasonable protection of human beings from other human beings, for everyone, a decade earlier, that, in my book, outweighs all the difference between immense riches and even more immense riches sometime later. (edit: if the difference ever materializes; it may be that at any moment in time we are still ahead)
On top of that, if you fear WBEs self-improving: don’t we lose the ability to become WBEs, and to become smarter, under the rule of a friendly AI? Now, you have some perfect oracle in your model of the AI, and it concludes that this is okay, but I do not have a model of a perfect oracle in the AI, and it is abundantly clear that an AI of any power can’t predict the outcome of allowing WBE self-improvement, especially under ethical constraints that forbid boxed emulation (and even if it could, there is the immense amount of computational resources the FAI would consume doing this). Once again, the typically selective avenue of thought: you don’t apply each argument to both FAI and AI to make a valid comparison. I do know that you have already thought a lot about this issue (but I don’t think you thought straight; this is not formal mathematics, where the inferences do not diverge from sense as the number of steps grows, it is fuzzy verbal reasoning, where they unavoidably do). You jump right onto the interpretation of what I think that is most favourable to you.
Also, I would hope that it would have a number of members with comparable or superior intellectual chops who would act as a check on any of Eliezer’s individual biases.
Not if there is self-selection for coincidence of their biases with Eliezer’s. It is even worse if the reasoning you outlined is used to lower risk estimates.
e.g. the lone genius point basically amounts to ad hominem
But why is it irrational, exactly?
but empirically, people trying to do things seems to make it more likely that they get done.
As long as they don’t lean on this heuristic too hard when choosing which path to take. Suppose it can be shown that some non-explicitly-friendly AGI design is extremely safe, while FAI is a case of higher risk with a chance at a slightly better payoff. Are you sure the latter is what has to be chosen?
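Spelled out in the simplest expected-value terms (the symbols are my own illustration, and I am treating the bad outcome as equally bad, say zero, for both designs): let $p_A$ be the probability that the very safe design turns out well with payoff $U_A$, and $p_B < p_A$ the probability that the FAI attempt turns out well with its slightly larger payoff $U_B$. The FAI path only wins in expectation if

\[ p_B U_B > p_A U_A \quad\Longleftrightarrow\quad \frac{U_B}{U_A} > \frac{p_A}{p_B}, \]

so a “slightly better” payoff cannot compensate for a large gap in safety.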
The hyper-foom is the worst. The cherry-picked filtering of what to advertise is also pretty bad.
EY founded it. Everyone else is self-selected for joining (as you yourself explained), and represents extreme outliers as far as I can tell.
Quoting from
http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf
I had been thinking: could it be that a respected computer vision expert indeed believes that world intentionality will just emerge in the system? That’d be pretty odd. Then I see that his definition of AI here already presumes a robust implementation of world intentionality, which is precisely what a tool like an optimizing compiler lacks.
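To illustrate the distinction, here is a toy “tool” optimizer of my own (nothing from the linked paper): it solves a purely mathematically defined problem and nothing more; there is no model of the world, no notion of self, no drives, and its “goal” exists only as the objective the caller hands in.

    # Toy illustration: a "tool" optimizer. It minimizes a given function f
    # over a given set of candidates, a mathematically defined problem.
    # Nothing here represents or cares about the outside world.
    def tool_optimize(f, candidates):
        best, best_value = None, float("inf")
        for x in candidates:
            value = f(x)
            if value < best_value:
                best, best_value = x, value
        return best

    # Example: choose a loop-unrolling factor under a made-up cost model,
    # the kind of narrowly defined choice an optimizing compiler makes.
    cost = lambda unroll: abs(unroll - 4) + 0.1 * unroll
    print(tool_optimize(cost, [1, 2, 4, 8, 16]))  # -> 4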
edit: and in advance of another objection: I know evolution can produce whatever the argument demands. Evolution, however, is a very messy and inefficient process for making very messy and inefficient solutions to problems nobody has ever even defined.