@Eliezer: I can’t imagine why I might have been amused at your belief that you are what a grown-up Eliezer Yudkowsky looks like.
No, but of course I wasn’t referring to similarity of physical appearance, nor do I characteristically comment at such a superficial level. Puhleease.
I don’t know if I’ve mentioned this publicly before, but as you’ve posted in this vein several times now, I’ll go ahead and say it:
functional self-similarity of agency extended from the ‘individual’ to groups
I believe that the difficult-to-understand, high-sounding, ultra-abstract concepts you use with high frequency and in great volume are fake. I don’t think you’re a poor explainer; I think you have nothing to say.
If I don’t give you as much respect as you think you deserve, no more explanation is needed than that, a conclusion I came to years ago.
Well, that explains the ongoing appearance of disdain and dismissal. But my kids used to do something similar, and shortly afterward I was sometimes gratified to hear an echo of my concepts in their own words.
Let me expand on my “fake” hint at a potential area of growth for your moral epistemology:
If you can accept that the concept of agency is inherent to any coherent meta-ethics, then we might proceed. But you seem to preserve and protect a notion of agency that can’t be coherently modeled.
You continue to posit agency that exploits information at a level unavailable to the system, and wave the problem away with hopes of math that “you don’t yet have.” Examples include your post today, in which a “real self” somehow dominates lesser aspects of self as if they were quite independent systems, and your “profound” but unmodelable interpretation of ishoukenmei, which bears only a passing resemblance to the very realistic usage I learned while living in Japan.
You continue to speak (and apparently think) in terms of “goals”, even when such “goals” can’t be effectively specified in the uncertain context of a complex, evolving future. You don’t seem to consider the cybernetic, systems-theoretic reality that no system of interesting complexity, humans included, actually attains long-term goals; it simply tries to null out the difference between its (evolving) internal model and its perception of its present reality. All the intelligence is in the transform function that effects its step-wise actions. That’s good enough, though never absolutely perfect, and the good enough that you can have is always preferable to the absolutely perfect that you can never have (unless you intend to maintain a fixed context).
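To make the cybernetic point concrete, here is a toy sketch in Python of the kind of error-nulling loop I’m describing. The one-dimensional state, the fixed gain, and every name in it are my own illustrative assumptions, nothing more:

    # A toy error-nulling loop in the spirit of the paragraph above. Nothing
    # here models a long-term goal; the system only reduces the proximal
    # difference between its (evolving) model and its perceived reality.

    def perceive(reality: float) -> float:
        """Stand-in for the system's (partial) read of its present reality."""
        return reality  # perfect perception, only to keep the sketch short

    def transform(error: float, gain: float = 0.5) -> float:
        """The 'transform function': turn the model/perception gap into an action."""
        return gain * error

    def run(model: float, reality: float, steps: int = 20) -> float:
        for _ in range(steps):
            error = model - perceive(reality)  # proximal difference, not a goal
            reality += transform(error)        # step-wise action toward the model
            model += 0.01 * error              # the internal model keeps evolving
        return reality

    print(run(model=1.0, reality=0.0))  # closes the gap, but never exactly

The only point of the sketch is that nothing in the loop refers to a long-term goal; it just keeps shrinking the model/perception gap, step by step, and that is good enough.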
You posit certainty (e.g. friendliness) as an achievable goal and use rigorous-sounding terms like “invariant goal” in regard to decision-making in an increasingly uncertain future, yet you blatantly and blithely ignore the concerns addressed to you over the years by me and others about how this can possibly work, given the ineluctable combinatorial explosion and the fundamentally under-specified priors.
I realize it’s like a Pascal’s Wager for you, and I admire your contributions in a sense somewhat tangential to your own, but, like an isolated machine intelligence with high processing power yet lacking an environment of interaction of complexity comparable to its own, you eventually run off at high speed exploring quite irrelevant reaches of possibility space.
As to my hint to you today: if you have a workable concept of agency, then you might profit from considering the functional self-similarity of agencies composed of agencies, and so on, self-similar with increasing scale. The emergent (yes, I know you dismiss “emergence” too) dynamics will tend to be perceived as increasingly moral from within the system, as each of us necessarily perceives it, due to multi-level selection, and therefore alignment, for “what works”: agents each acting in their own interest within an ecology of competing interests, each nulling out the proximal difference between its model and its perceived reality; wash, rinse, repeat.
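If a structural picture helps, here is an equally toy sketch, again in Python and again with every name and number my own assumption, of what I mean by agencies composed of agencies sharing the same functional form:

    # A toy sketch of agencies composed of agencies, each acting in its own
    # interest by nudging a shared "reality" toward its own model. The simple
    # summing and averaging rules are illustrative assumptions only.

    from typing import List

    class Agent:
        """A leaf agent: acts to reduce the gap between its model and reality."""
        def __init__(self, model: float):
            self.model = model

        def preferred_state(self) -> float:
            return self.model

        def step(self, reality: float) -> float:
            return 0.1 * (self.model - reality)  # act in its own interest

    class Agency(Agent):
        """An agency of agents (or of agencies): the same interface, larger scale."""
        def __init__(self, members: List[Agent]):
            self.members = members

        def preferred_state(self) -> float:
            # what the group "prefers" emerges from its members' interests
            return sum(m.preferred_state() for m in self.members) / len(self.members)

        def step(self, reality: float) -> float:
            # the group acts by its members each acting in their own interest
            return sum(m.step(reality) for m in self.members)

    # Agencies nest without changing the interface: the self-similarity.
    group = Agency([Agent(0.2), Agent(0.8), Agency([Agent(0.5), Agent(0.7)])])
    reality = 0.0
    for _ in range(50):
        reality += group.step(reality)
    print(reality)  # settles near a compromise among the competing interests

The group presents the same functional form as its members; what it “prefers” and how it acts are simply what that ecology of competing interests works out to, level by level.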
Sheesh, I may be abstract, I may be a bit too out there to relate to easily, but I have a hard time with “fake.”
I meant to shake your tree a bit, in a friendly way, but not to knock you out of it. I’ve said repeatedly that I appreciate the work you do and even wish I could afford to do something similar. I’m a bit dismayed, however, by the obvious emotional response and meanness from someone who prides himself on sharpening the blade of his rationality by testing it against criticism.