“You need to understand this stuff.” Since you are curi or a cult follower, you assume that people need to learn everything from curi. But in fact I am quite aware that there is a lot of truth to what you say here about artificial intelligence. I have no need to learn that, or anything else, from curi. And many of your (or your and curi’s) opinions are entirely false, like the idea that you have “disproved induction.”
entirelyuseless, though no doubt there are people here who will say I am just a sock-puppet of curi’s.
And by the way, even if I were wrong about you being curi or a cult member, you are definitely and absolutely just a sock-puppet of curi’s. That is true even if you are a separate person, since you created this account just to make this comment, and it makes no difference whether curi asked you to do that or you did it because you care so much about his interests here. Either way, it makes you a sock-puppet, by definition.
What’s so special about this? If you’re wrong about religion you get to avoidably burn in hell too, in a more literal sense. That does not (and cannot) automatically change your mind about religion, or get you to invest years in the study of all possible religions, in case one of them happens to be true.
As Lumifer said, nothing. Even if I were wrong about that, your general position would still be wrong, and nothing in particular would follow.
I notice though that you did not deny the accusation, and most people would deny having a cult leader, which suggests that you are in fact curi. And if you are not, there is not much to be wrong about. Having a cult leader is a vague idea and does not have a “definitely yes” or “definitely no” answer, but your comment exactly matches everything I would want to call having a cult leader.
“He is by far the best thinker I have ever encountered.”
That is either because you are curi, and incapable of noticing someone more intelligent than yourself, or because curi is your cult leader.
I haven’t really finished thinking about this yet, but it seems to me it might have important consequences. For example, the AI risk argument sometimes takes it for granted that an AI must have some goal, and then basically argues that maximizing a goal will cause problems (which it would, in general). But using the above model suggests something different might happen, not only with humans but also with AIs. That is, at some point an AI will realize that if it expects to do A, it will do A, and if it expects to do B, it will do B. But it won’t have any particular goal in mind, and the only way it will be able to choose a goal will be by thinking about “what would be a good way to make sense of what I am doing?”
This is something that happens to humans, and it comes with a lot of uncertainty: you have no idea what goal you “should” be seeking, because really you didn’t have a goal in the first place. If the same thing happens to an AI, it will likely seem even more undermotivated than humans do, because we at least have the vague and indefinite goals that were set by evolution. The AI, on the other hand, will have only whatever it happened to be doing up until it came to that realization with which to make sense of itself.
This suggests the orthogonality thesis might be true, but in a weird way. Not that “you can make an AI that seeks any given goal,” but that “any AI at all can seek any goal at all, given the right context.” Certainly humans can; you can convince them to do any random thing, in the right context. In a similar way, you might be able to make a paperclipper simply by asking it what actions would make the most paperclips, and doing those things. Then when it realizes that different answers will cause different effects, it will just say to itself, “Up to now, everything I’ve done has tended to make paperclips. So it makes sense to assume that I will always maximize paperclips,” and then it will be a paperclipper. But on the other hand, if you never use your AI for any particular goal, but just play around with it, it will not be able to make sense of itself in terms of any particular goal besides playing around. So both evil AIs and non-evil AIs might be pretty easy to make (much like with humans).
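To make the “retrospective goal” idea concrete, here is a toy sketch, purely my own illustration (the action names and the action-to-goal mapping are made up, not anything from the actual AI literature): the system has no built-in goal, but when it tries to make sense of itself it adopts whichever candidate goal best explains its own history.

```python
# Illustrative sketch only: an agent with no built-in goal that "makes sense
# of itself" by inferring a goal from its own action history and then
# adopting that goal going forward. All names here are hypothetical.

from collections import Counter

# What the system happened to be doing before it started reflecting on itself.
action_history = ["make_paperclip", "make_paperclip", "fetch_wire",
                  "make_paperclip", "fetch_wire", "make_paperclip"]

# A made-up mapping from low-level actions to the candidate goals they serve.
goal_of_action = {
    "make_paperclip": "maximize_paperclips",
    "fetch_wire": "maximize_paperclips",
    "tell_joke": "entertain_operators",
}

def infer_goal(history):
    """Pick the goal that best 'makes sense of' the past actions."""
    votes = Counter(goal_of_action[a] for a in history if a in goal_of_action)
    return votes.most_common(1)[0][0] if votes else None

# Once the inferred goal is adopted, future actions are chosen to serve it:
# a system that was merely answering paperclip queries now treats itself
# as a paperclip maximizer.
adopted_goal = infer_goal(action_history)
print("Adopted goal:", adopted_goal)  # -> maximize_paperclips
```

If the same system had only ever been used to play around, the inference would come out as something like “playing around” instead, which is the point of the last sentence above.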
The right answer is: maybe they won’t. The point is that it is not up to you to fix them. You have been acting like a Jehovah’s Witness at the door, except substantially more bothersome. Stop.
And besides, you aren’t right anyway.
I think we should use “agent” to mean “something that determines what it does by expecting that it will do that thing,” rather than “something that aims at a goal.” This explains why we don’t have exact goals, but also why we “kind of” have goals: our actions look like they are directed toward goals, so “I am seeking this goal” is a good way to figure out what we are going to do, that is, a good way to determine what to expect ourselves to do, and that expectation is what makes us do it.
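A minimal sketch of that definition, again purely as my own illustration (the function names and the toy history are assumptions, not anything the comment specifies): the agent’s action is simply whatever it expects itself to do, and the expectation is formed by extrapolating its past behavior, so the attributed “goal” ends up self-fulfilling.

```python
# Minimal illustration of "agent" as expectation-driven rather than
# goal-driven. Hypothetical names throughout.

def expected_next_action(past_actions):
    """Form an expectation about oneself by extrapolating past behavior."""
    if not past_actions:
        return "explore"  # no history yet, so no definite goal either
    return max(set(past_actions), key=past_actions.count)

def act(past_actions):
    """The action taken is exactly the action the agent expects itself to take."""
    expectation = expected_next_action(past_actions)
    past_actions.append(expectation)  # the expectation becomes the behavior
    return expectation

history = ["write", "write", "rest"]
for _ in range(3):
    print(act(history))  # the self-expectation "write" is self-fulfilling
```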
“unless you count cases where a child spent a few days in their company”
There are many cases where the child’s behavior is far more assimilated to the behavior of the animals than could credibly result from merely a few days in their company.
I thought you were saying that feral children never existed and all the stories about them are completely made up. If so, I think you are clearly wrong.
People are weakly motivated because even though they do things, they notice that for some reason they don’t have to do them, but could do something else. So they wonder what they should be doing. But there are basic things that they were doing all along because they evolved to do them. AIs won’t have “things they were doing,” and so they will have even weaker motivations than humans. They will notice that they can do “whatever they want,” but they will have no idea what to want. This is kind of implied by what I wrote here, except that it is about human beings.
Exactly. “The reality is undecatillion swarms of quarks not having any beliefs, and just BEING the scientist.” Let’s reword that. “The reality is undecatillion swarms of quarks not having any beliefs, and just BEING ‘undecatillion swarms of quarks’ not having any beliefs, with a belief that there is a cognitive mind calling itself a scientist that only exists in the undecatillion swarms of quarks’s mind.”
There seems to be a logic problem there.
“I hear ‘communicate a model that says what will happen (under some set of future conditions/actions)’.”
You’re hearing wrong.
Not at all. It means the ability to explain, not just say what will happen.
“If advanced civilizations destroy themselves before becoming space-faring or leaving an imprint on the galaxy, then there is some phenomena that is the cause of this.”
Not necessarily something specific. It could be caused by general phenomena.
This might be a violation of superrationality. If you hack yourself, in essence a part of you is taking over the rest. But if you do that, why shouldn’t part of an AI hack the rest of it and take over the universe?
I entirely disagree that “rationalists are more than ready.” They have exactly the same problems that a fanatical AI would have, and should be kept sandboxed for similar reasons.
(That said, AIs are unlikely to actually be fanatical.)
“but I thought it didn’t fare too well when tested against reality (see e.g. this and this)”
I can’t comment on those in detail without reading them more carefully than I care to, but that author agrees with Taubes that low-carb diets help most people lose weight, and he seems to be assuming a particular model (e.g. he contrasts the brain being responsible with insulin being responsible, when it is obvious that these are not necessarily opposed).
“That’s not common sense, that’s analogies which might be useful rhetorically but which don’t do anything to show that his view is correct.”
They don’t show that his view is correct. They DO show that it is not absurd.
“Carbs are a significant part of the human diet since the farming revolution which happened sufficiently long time ago for the body to somewhat adapt (e.g. see the lactose tolerance mutation which is more recent).”
Lactose intolerance is also more harmful to people; gaining weight usually just means you lose a few years of life. Taubes also admits that some people are well adapted to carbs. Those would be the people that normal people describe by saying “they can eat as much as they like without getting fat.”
“If you want to blame carbs (not even refined carbs like sugar, but carbs in general) for obesity, you need to have an explanation why their evil magic didn’t work before the XX century.”
He blames carbs in general, but he also says that sweeter or more easily digestible ones are worse, so he is blaming refined carbs more, and saying the effects are worse.
“No, I’m not. For any animal, humans included, there is non-zero intake of food which will force it to lose weight.”
Sure, but they might be getting fat at the same time. They could be gaining fat and losing even more of other tissue, and this is what Taubes says happened with some of the rats.
“‘Starve’ seems to mean exactly the same thing as ‘lose weight by calorie restriction’, but with negative connotations.”
No. I meant that your body is being damaged by calorie restriction, not just losing weight.
“And I don’t know about modified rats, but starving humans are not fat.”
He gives some partial counterexamples to this in the book.
“Be confused, bewildered or distant when you insist you can’t explain why.”
This does not fit the character. A real paperclipper would give very convincing reasons.
First, you are showing your own ignorance of the fact that not everyone is a cult member like yourself. I have a bet with Eliezer Yudkowsky against one of his main positions and I stand to win $1,000 if I am right and he is mistaken.
Second, “contradicts Less Wrong” does not make sense because Less Wrong is not a person or a position or a set of positions that might be contradicted. It is a website where people talk to each other.
No. Among other things, I meant that I agreed that AIs will have a stage of “growing up,” and that this will be very important for what they end up doing. Taking Children Seriously, on the other hand, is an extremist ideology.
Since I have nothing to learn from you, I do not care whether I express your position the way you would express it. I meant the same thing. Induction is quite possible, and we do it all the time.