(I’m not even sure a singleton can start off not being bad.)
The context here is attempting to agree with other agents about ethics. A singleton doesn’t have that problem. Being a singleton means never having to say you’re sorry.
Clear thinkers who can communicate cheaply are automatically, collectively, a singleton with a very complex utility function. No one generally has to attempt to agree with other agents about ethics; they only have to take actions that take into account the conditional behaviors of others.
What?
If we accept these semantics (a collection of clear thinkers is a “singleton” because you can imagine drawing a circle around them and labelling them a system), then there’s no requirement for the thinkers to be clear, or to communicate cheaply. We are a singleton already.
Then the word “singleton” is useless.
This is playing with semantics to sidestep real issues. No one “has to” attempt to agree with other agents, in the same sense that no one “has to” achieve their goals, or avoid pain, or live.
You’re defining away everything of importance. All that’s left is a universe of agents whose actions and conflicts are dismissed as just part of the computation of the great Singleton within us all. Om.
I’m not sure what you mean by “singleton” here. Can you define it / link to a relevant definition?
http://www.nickbostrom.com/fut/singleton.html
Thanks—that’s what I thought it meant, but your meaning is much clearer after reading this.
Yes, I think others are missing your point here. The bits about being clear thinkers and communicating cheaply are important. It allows them to take each other’s conditional behavior into account, thus acting as a single decision-making system.
But I’m not sure how useful it is to call them a singleton, as opposed to reserving that word for something more obvious to draw a circle around, like an AI or world hegemony.
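To make the conditional-behavior point in the parent comment concrete, here is a minimal sketch (not from the original thread) in the spirit of program equilibrium: two agents in a one-shot Prisoner's Dilemma who can each inspect the other's policy cooperate exactly when the other runs the same policy, and so land on the cooperative outcome as if they were a single decision-maker, without ever negotiating about "ethics". The policy name, the payoff numbers, and the use of `inspect.getsource` to model "cheap communication" are all illustrative assumptions.

```python
import inspect

# Sketch: each agent's policy can read the other agent's source code.
# An agent cooperates ("C") exactly when the opponent runs an identical
# policy, so both end up at the cooperative outcome -- jointly behaving
# like one decision-making system.

def clone_test_policy(own_source: str, other_source: str) -> str:
    """Cooperate iff the opponent's policy is textually identical to ours."""
    return "C" if other_source == own_source else "D"

# One-shot Prisoner's Dilemma payoffs for the row player (illustrative numbers).
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

source = inspect.getsource(clone_test_policy)

# "Cheap communication" is modeled as each agent seeing the other's source.
move_a = clone_test_policy(source, source)
move_b = clone_test_policy(source, source)

print(move_a, move_b)             # C C
print(PAYOFFS[(move_a, move_b)])  # 3, rather than the mutual-defection payoff of 1
```

If either agent swapped in an unconditional defector, the clone test would fail and both would defect, which is why the "clear thinking plus cheap communication" assumptions above are doing the real work in the conditional-behavior argument.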