Since we can imagine a continuous sequence of ever-better Roombas, the notion of “has beliefs and values” seems to be a continuous one rather than a discrete yes/no issue.
Does that have implications for self-awareness and consciousness?
Yes, I think so. One prominent hypothesis is that we evolved consciousness because there has to be some way for us to take an overview of ourselves as a process: our goals, our environment, and the degree to which we think our efforts are actually achieving those goals. We need this overview so that we can run the whole “I am failing to achieve my goals?” check. Why this results in “experience” is not something I am going to attempt to explain in this post.
As I said,
With the superRoomba, the pressure that the superRoomba applies to the environment doesn’t vary as much with the kind of trick you play on it; it will eventually work out what changes you have made, and adapt its strategy so that you end up with a clean floor.
This criterion seems to separate an “inanimate” object like a hydrogen atom or a pebble bouncing around the world from a superRoomba.
See heavily edited comment above, good point.
Clearly these are two different things; the real question you are asking is in what relevant way are they different, right?
First of all, the Roomba does not “recognize” a wall as a reason to stop going forward. It gets some input from its front sensor, and then it turns to the right.
So what is the relevant difference between the Roomba that gets some input from its front sensor and then turns to the right, and the superRoomba that gets evidence from its wheels that it is cleaning the room, but entertains the hypothesis that maybe someone has suspended it in the air, and goes and tests whether this alternative (disturbing) hypothesis is true, for example by calculating what the inertial difference between being suspended and actually being on the floor would be?
The difference is the difference between a simple input-response architecture, and an architecture where the mind actually has a model of the world, including itself as part of the model.
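To make that contrast concrete, here is a minimal sketch of the two architectures in Python. Every name in it (ReflexRoomba, wheel_load, run_inertial_experiment, and so on) is a hypothetical illustration, not any real Roomba API:

```python
# Hypothetical sketch; none of these names correspond to a real Roomba API.

class ReflexRoomba:
    """Pure input-response: sensor reading in, action out. No model."""

    def act(self, sensors: dict) -> str:
        return "turn_right" if sensors["front_bumper"] else "drive_forward"


class ModelBasedRoomba:
    """Keeps a model of the world that includes itself as a part."""

    def __init__(self):
        # Beliefs: a crude self-model -- am I actually on the floor?
        self.beliefs = {"on_floor": True}
        # Values: a clean floor, represented separately from the beliefs.
        self.goal = "clean_floor"

    def act(self, sensors: dict) -> str:
        # Update the self-model from evidence: freely spinning wheels
        # carry almost no load.
        self.beliefs["on_floor"] = sensors["wheel_load"] > 0.5
        if not self.beliefs["on_floor"]:
            # Test the disturbing hypothesis that someone suspended me.
            return "run_inertial_experiment"
        return "turn_right" if sensors["front_bumper"] else "drive_forward"
```

The point is not the details but where the behavior lives: the reflex agent’s “beliefs” are exhausted by its wiring, while the model-based agent has internal state that can be wrong, and hence can be checked against the world.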
SilasBarta notes below that the word “model” is playing too great a role in this comment for me to use it without defining it precisely. What does a Roomba not have that causes it to behave in that laughable way when you suspend it so that its wheels spin?
What does the superRoomba that works out that it is being suspended by performing experiments involving its inertial sensor, and then hacks into your computer and blackmails you into letting it get back onto the floor to clean it (or even causes you to clean the floor yourself), have?
Imagine a collection of tricks that you could play on the Roomba: ways of changing its environment outside of what the designers had in mind. The pressure that it applies to its environment (defined, for example, as the derivative of the final state of the environment with respect to how long you leave the Roomba on) would then vary with which trick you play. For example, if you replace its dirt-sucker with a black spray-paint can, you end up with a black floor. If you put it on a nonstandard floor surface that produces dirt in response to stimulation, you get a dirtier floor than you had to start with.
With the superRoomba, the pressure that the superRoomba applies to the environment doesn’t vary as much with the kind of trick you play on it; it will eventually work out what changes you have made, and adapt its strategy so that you end up with a clean floor.
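To pin the “pressure” notion down, here is a minimal formalization; the notation is mine, not from the original comment:

```latex
% Let S(t, k) be the state of the environment after the agent has run
% for time t while trick k is in play. The pressure applied under
% trick k is then
P(k) = \frac{\partial\, S(t, k)}{\partial t}
% For the Roomba, P(k) varies sharply with k (spray-paint trick: the
% floor trends black). For the superRoomba, P(k) is nearly invariant
% in k: whatever the trick, the floor trends clean.
```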
If, however, you programmed the Roomba not to interpret the input it gets from being in midair as an example of being in a room it should clean
then you would be building a beliefs/desires distinction into it.
The difference between the Roomba spinning its wheels and you working for nothing is that if you told the Roomba that it was just spinning its wheels, it wouldn’t react. It has no concept of “I am failing to achieve my goals”. You, on the other hand, would investigate: prod your environment to check whether it was actually as you thought, and eventually you would update your beliefs and change your behavior.
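As a sketch of that missing check (my framing, under the simplifying assumption that progress toward the goal can be scored as a number), it would look something like this:

```python
# Hypothetical sketch of the "I am failing to achieve my goals?" check.

def goal_progress_check(predicted_progress: float,
                        observed_progress: float,
                        tolerance: float = 0.1) -> str:
    """Compare what the world model predicts against what is observed."""
    if abs(predicted_progress - observed_progress) <= tolerance:
        return "continue"  # model and world agree; keep going
    # Mismatch: prod the environment, update beliefs, change behavior.
    return "investigate"
```

The spinning Roomba never runs anything like this: its predicted progress is not represented anywhere, so there is nothing for a mismatch to register against.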
Would you claim the dog has no belief/value distinction?
Actually, I think I would. I think that pretty much all nonhuman animals also don’t really have the belief/value distinction.
I think that having a belief/values distinction requires being at least as sophisticated as a human. There are cases where a human sets a particular goal and then does things that are unpleasant in the short term (like working hard and not wasting all day commenting on blogs) in order to obtain a long-term valuable thing.
An agent using UDT doesn’t necessarily have a beliefs/values separation,
I am behind on your recent work on UDT; this fact comes as a shock to me. Can you provide a link to one of your posts, or an example here, making clear that UDT doesn’t necessarily have a beliefs/values separation? Thanks.
One possible response here: We could consider simple optimizers like amoebas or Roomba vacuum cleaners as falling into the category “mind without a clear belief/values distinction”; they definitely do a lot of signal processing and feature extraction and control theory, but they don’t really have values. The Roomba would happily sit with its wheels lifted off the ground, thinking that it was cleaning a nonexistent room.
Is it possible that the dichotomy between beliefs and values is just an accidental byproduct of our evolution, perhaps a consequence of the specific environment that we’re adapted to, instead of a common feature of all rational minds?
In the normal usage, “mind” implies the existence of a distinction between beliefs and values. In the LW/OB usage, it implies that the mind is connected to some actuators and sensors which connect to an environment and is actually doing some optimization toward those values. Certainly “rational mind” entails a beliefs/values separation.
But suppose we abandon the beliefs/values separation: what properties do we have left? Is the concept “mind without a beliefs/values separation” simply the concept “thing”?
Thought they nearly discovered my true identity....
The meetup has been good fun. Much conversing, coffee, and a restaurant meal.
It would be an evolutionary win to be interested in things that the other gender is interested in.
Why? I think that perhaps your reasoning is that you date someone based upon whether they have the same interests as you. But I suspect that this may be false—i.e. we confabulate shared interests as an explanation, where the real explanation is status or looks.
Upvoted. I came to exactly the same conclusion. Men are extremophiles, and in (7), Eliezer explained why.
As to Anna’s point below, we should ask how much good can be expected to come from trying to go against nature here, versus how difficult it will be. That is, spending effort X on attracting more women to LW must be weighed against spending that same effort on something else.
If high intellectual curiosity is a rare trait in males and a very rare one in females, then given that you are here, this doesn’t surprise me. You are more intellectually curious than most of the men I have met, and the men I have met are themselves a sample selected for high intellectual curiosity.
This group feels “cliquey”. There are a lot of in-phrases and a lot of technical jargon
every incorrect comment is completely and utterly destroyed by multiple people.
These apply to both genders...
The obvious evolutionary psychology hypothesis behind the imbalanced gender ratio in the iconoclastic community is the idea that males are inherently more attracted to gambles that seem high-risk and high-reward; they are more driven to try out strange ideas that come with big promises, because the genetic payoff for an unusually successful male has a much higher upper bound than the genetic payoff for an unusually successful female. … a difference as basic as “more male teenagers have a high cognitive temperature” could prove very hard to address completely.
You ask evo-psych why we have a problem, and evo-psych provides the answer. The gender that has a biological reason to pursue low-risk strategies tends (shockingly!) not to show much interest in weird, high-risk, high-payoff-looking things like saving the world.
Ask evo-psych how to solve the problem, then. We already know that women tend to like doing highly visible charitable activities (for signaling reasons). Maybe we should provide a way for people to make little sacrifices of their time and then make those sacrifices visible over the web. I am thinking of a rationalist social network that allowed people to very prominently (perhaps even with a publicly visible part here on LW) show off how many hours they had volunteered, next to a picture of themselves. I once attended an Amnesty International letter-writing group that was 90% female, for example.
However, remember that association with any radical-sounding idea is high-risk compared to association with a less radical but equally charitable-sounding idea. Thus I would predict that women will, on average, tend not to get involved with singularitarianism, transhumanism, existential risks, etc., until these ideas go mainstream.
Psychology, yes, definitely. Bio, I do not know, but I would like to see what it looks like for evo-psych.
I cannot do this, and I don’t understand anyone who can. If you consciously say “OK, it would be really nice to believe X, now I am going to try really hard to start believing it despite the evidence against it”, then you already disbelieve X.