Hi Act, welcome!
I will gladly converse with you in Russian if you want to.
Why do you want a united utopia? Don’t you think different people prefer different things? Even if we assume the ultimate utopia is uniform, wouldn’t we want to experiment with different things to get there?
Would you feel “dwarfed by an FAI” if you had little direct knowledge of what the FAI is up to? Imagine a relatively omniscient and omnipotent god who takes care of things on some (mostly invisible) level but never comes down to solve your homework.
--
P.S.
I am dismayed that you were ambushed by the far-right crowd, especially on the welcome thread.
My impression is that you are highly intelligent, very decent and admirably enthusiastic. I think you are a perfect example of the values that I love in this community and I very much want you on board. I’m sure that I personally would enjoy interacting with you.
Also, I am confident you will go far in life. Good dragon hunting!
--
I wouldn’t call it an ambush, but in any case Acty emerged from that donnybrook in quite good shape :-)
So pointing out flaws in someone’s position is now “ambushing” them?
Disagreeing is ok. Disagreeing is often productive. Framing your disagreement as a personal attack is not ok. Let’s treat each other with respect.
I sympathize with your sentiment regarding friendship, community etc. The thing is, when everyone is friends the state is not needed at all. The state is a way of using violence or the threat of violence to resolve conflicts between people in a way which is as good as possible for all parties (in the case of egalitarian states; other states resolve conflicts in favor of the ruling class). Forcing people to obey any given system of law is already an act of coercion. Why magnify this coercion by forcing everyone to obey the same system rather than allowing any sufficiently big group of people to choose their own system?
Moreover, in the search for utopia we can go down many paths. In the spirit of the empirical method, it seems reasonable to allow people to explore different paths if we are to find the best one.
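To make the explore-different-paths argument concrete, here is a toy explore-then-commit bandit simulation in Python. It is only a sketch under invented assumptions: the candidate paths, their payoffs, and the observation noise are all made up, and explore_then_commit / commit_early are hypothetical names for the two policies being compared.

```python
import random

# Toy model: each candidate "path to utopia" pays off some unknown amount.
# The payoffs and noise level below are invented purely for illustration.
TRUE_PAYOFFS = [0.3, 0.5, 0.9, 0.4]  # path 2 is secretly the best
BEST_PATH = 2

def try_path(path):
    """One noisy observation of how well a path works in practice."""
    return TRUE_PAYOFFS[path] + random.gauss(0, 0.3)

def explore_then_commit(trials_per_path=30):
    """Separate groups each try a different path; everyone adopts the best."""
    estimates = [
        sum(try_path(p) for _ in range(trials_per_path)) / trials_per_path
        for p in range(len(TRUE_PAYOFFS))
    ]
    return max(range(len(TRUE_PAYOFFS)), key=lambda p: estimates[p])

def commit_early():
    """Pick a single system for everyone after one noisy look at each path."""
    return max(range(len(TRUE_PAYOFFS)), key=try_path)

random.seed(0)
runs = 1000
explore_hits = sum(explore_then_commit() == BEST_PATH for _ in range(runs))
early_hits = sum(commit_early() == BEST_PATH for _ in range(runs))
print(f"exploring first finds the best path {explore_hits}/{runs} times")
print(f"committing early finds it {early_hits}/{runs} times")
```

With enough independent trials the noise averages out, which is the empirical-method point: parallel experiments identify the best path far more reliably than a single early commitment.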
I would not actually be awfully upset if the FAI did my homework for me...
I used “homework” as a figure of speech :)
Being told “you’re not smart enough to fight dragons, just sit at home and let Momma AI figure it out” would make me sad.
This might be so. However, you must consider the tradeoff between this sadness and the efficiency of dragon-slaying.
So really, once superintelligence is possible and has been made, I would like to become a superintelligence.
The problem is, if you instantly go from human intelligence to far superhuman, it looks like a breach in the continuity of your identity. And such a breach might be tantamount to death. After all, what makes tomorrow’s you the same person as today’s you, if not the continuity between them? I agree with Eliezer that I want to be upgraded over time, but I want it to happen slowly and gradually.
I do think that some kind of organisational cooperative structure would be needed even if everyone were friends—provided there are dragons left to slay. If people need to work together on dragonfighting, then just being friends won’t cut it—there will need to be some kind of team, and some people delegating different tasks to team members and coordinating efforts. Of course, if there aren’t dragons to slay, then there’s no need for us to work together and people can do whatever they like.
And yeah—the tradeoff would definitely need to be considered. If the AI told me, “Sorry, but I need to solve negentropy and if you try and help me you’re just going to slow me down to the point at which it becomes more likely that everyone dies”, I guess I would just have to deal with it. Making it more likely that everyone dies in the slow heat death of the universe is a terribly large price to pay for indulging my desire to fight things. It could be a tradeoff worth making, though, if it turns out that a significant number of people are aimless and unhappy unless they have a cause to fight for—we can explore the galaxy and fight negentropy and this will allow people like me to continue being motivated and fulfilled by our burning desire to fix things. It depends on whether people like me, with the aforementioned burning desire, are a minority or a large majority. If a large majority of the human race feels listless and sad unless they have a quest to do, then it may be worthwhile letting us help even if it impedes the effort slightly. (A toy sketch of this tradeoff appears below.)
And yeah—I’m not sure that just giving me more processor power and memory without changing my code counts as death, but simultaneously giving a human more processor power and more memory and not increasing their rationality sounds… silly and maybe not safe, so I guess it’ll have to be a gradual upgrade process in all of us. I quite like that idea though—it’s like having a second childhood, except this time you’re learning to remember every book in the library and fly with your jetpack-equipped robot feet, instead of just learning to walk and talk. I am totally up for that.
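The sketch promised above: a minimal toy model of the help-versus-slowdown tradeoff, in Python. Every parameter value is invented purely for illustration; the point is only that the verdict flips depending on what fraction of people need a quest.

```python
# Toy tradeoff: letting humans join the dragon-fight slows the effort a bit
# (a cost everyone pays) but fulfils the people who need a quest (a gain for
# that fraction). All parameter values are invented for illustration only.

def average_utility(quest_fraction, slowdown_cost=0.05, fulfilment_gain=0.30):
    """Average utility per person, relative to keeping humans out of the way.

    quest_fraction  -- share of people who are listless without a cause
    slowdown_cost   -- utility everyone loses because the effort is slower
    fulfilment_gain -- utility a quest-needing person gains from helping
    """
    return quest_fraction * fulfilment_gain - slowdown_cost

for fraction in (0.05, 0.20, 0.50, 0.90):
    verdict = "worth it" if average_utility(fraction) > 0 else "not worth it"
    print(f"{fraction:4.0%} need a quest -> letting them help is {verdict}")
```

On these made-up numbers the break-even point sits at slowdown_cost / fulfilment_gain, about 17% of the population, which is exactly the minority-versus-majority question.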
I do think that some kind of organisational cooperative structure would be needed even if everyone were friends...
We don’t need the state to organize. Look at all the private organizations out there.
It could be a tradeoff worth making, though, if it turns out that a significant number of people are aimless and unhappy unless they have a cause to fight for...
The cause might be something created artificially by the FAI. One idea I had is a universe with “pseudodeath”: it doesn’t literally kill you, but it relocates you to another part of the universe, which results in the loss of your connections with all the people you knew. Like in Border Guards but involuntary, so that human communities have to fight against “nature” to survive.
One idea I had is a universe with “pseudodeath”: it doesn’t literally kill you, but it relocates you to another part of the universe, which results in the loss of your connections with all the people you knew.
Sort of a cosmic witness relocation program! :)
The following is pure speculation. But I imagine an FAI would begin its work by vastly reducing the chance of death, and then raising everyone’s intelligence and energy levels to those of John_von_Neumann. That might allow us to bootstrap ourselves to superhuman levels with minimal guidance.