Have you seen this explored in mathematical language? ’Cause it’s all so weird that there’s no way I can agree with Hofstadter to that extent. As yet, I don’t really know what “smart” means.
I’ve never recognised a more effective psychonaut than you. You’ve probably seen further than I, so I’d appreciate your opinion on a hypo I’ve been nursing.
You see the way pain reacts to your thoughts. If you respect its qualia, find a way to embrace them, that big semi-cognisant iceberg of You, the Subconscious, will take notice, and it will get out of your way, afford you a little more self-control, a little less carrot and stick, a little less confusion, bring you a little closer to some rarely attained level of adulthood.
I suspect that every part of the subconscious can be made to yield in the same way. I think introspective gains are self-accelerating: you don’t just get insights and articulations, you get general introspection skills. I seem to have lost hold of it for now, but I once had what seemed to be an ability to take any vague emotional percept and unravel it into an effective semantic ordinance. It was awesome. I wish I’d been more opportunistic with it.
I get the impression you don’t share my enthusiasm for the prospect of developing a culture supportive of deep subconscious integration, or illumination or whatever you want to call it. What have you seen? Found a hard developmental limit? Or, this is fairly cryptic, do tell me if this makes no sense, but are you hostile to the idea of letting your shadow take you by the hand and ferry you over the is-ought divide? I suspect that the place it would take you is not so bad. I think any alternative you might claim to have is bound to turn out to be nothing but a twisted reflection of its territories.
As I understand it, Hofstadter’s advocacy of cooperation was limited to games with some sense of source-code sharing. Basically, both agents were able to assume their co-players had an identical method of deciding on the optimal move, and that that method was optimal. That assumption allows a rather bizarre little proof that cooperation is the result said method arrives at.
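A toy rendering of that proof, as I understand it (the encoding here is mine, not Hofstadter’s; a minimal sketch in Python): if both players are known to run the same deterministic procedure, only the symmetric outcomes are reachable, so choosing a move reduces to comparing the two diagonal payoffs.

    # Standard PD payoffs to the row player, keyed by (my_move, their_move).
    PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0,
              ('D', 'C'): 5, ('D', 'D'): 1}

    def shared_optimal_move():
        # Both players run this exact procedure, so whatever it returns,
        # the co-player returns too: only (C, C) and (D, D) are reachable.
        return max(['C', 'D'], key=lambda m: PAYOFF[(m, m)])

    print(shared_optimal_move())  # 'C', since PAYOFF[(C, C)] = 3 > PAYOFF[(D, D)] = 1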
And think about it: how could a mathematician actually advocate cooperation in pure, zero-knowledge, vanilla PD? That just doesn’t make any sense as a model of an intelligent human being’s opinions.
Sometimes I will stand and look at the church and wonder if today is the day I get desperate enough to go full sociopath, pretend to join the flock, and use the network to start a deviant Christianity offshoot.
I don’t know Civ, but for practising the kind of strategizing you’re describing I’d recommend Neptune’s Pride.
and I’ve known people for whom the opposite was tragically true.
Heh. I’m one of those people. I practically fell in love with my first ally. I’m lucky they were really nice when they broke my lines, essentially throwing me a sword and telling me to defend myself before starting the invasion. I’d have been heartbroken otherwise. I guess to an extent I thought they were damning us both to death by zombie bot rush by breaking our alliance, but their judgement was apt: after crippling me, they proceeded to conquer the galaxy, barely the worse for wear.
It was from this game that I learned the reason I have an intermittent habit of falling head over heels in love with friends probably has more to do with diplomacy than anything else. I can rapidly build unreasonably strong alliances from nothing this way, at the cost of forming a few confusing, inconvenient bonds when I hit the wrong target. It’s always nice to learn that the quirks of your mechanism serve a purpose.
Also, is there some place Lesswrongians go for real-time chat?
IRC channel, #lesswrong on irc.freenode.net
But now I’ve just discovered that argumentum ad governess is invalid
Where was the argument for that? Non-humans attaining rights by a different path does not erase all other paths.
“If the inequitable society has greater total utility, it must be at least as good as the equitable one” would still hold though, no?
Well… yeah, technically. But take for example the model (worlds = {A, B}, f(W) = sum(log(felicity(e)) for e in population(W))), with world A = (2,2,2,2) and world B = (1,1,1,9). f(A) ≥ f(B), i.e. ¬(f(A) < f(B)), so ¬(A < B), i.e. the equitable society is also at least as good as the inequitable, higher-sum-utility one. So if you want to support all embeddings via summation of an increasing function of the units’ QoL… I’d be surprised if those embeddings had anything in common aside from what the premises required. I suspect anything that agreed with all of them would require all worlds the original premises don’t relate to be equal, i.e. ¬(A < B) ∧ ¬(B < A).
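To make that concrete, a quick sketch (total_u and log_u are my names for the two embeddings):

    import math

    A = (2, 2, 2, 2)   # the equitable world
    B = (1, 1, 1, 9)   # the inequitable world with greater total utility

    def total_u(world):
        # Total utilitarianism: sum of felicities.
        return sum(world)

    def log_u(world):
        # The f above: sum of log(felicity) over the population.
        return sum(math.log(f) for f in world)

    print(total_u(A), total_u(B))  # 8 12        -> the total embedding prefers B
    print(log_u(A), log_u(B))      # ~2.77 ~2.20 -> the log embedding prefers A

Both are sums of an increasing function of individual QoL, so both respect the premises, and they order A and B oppositely; anything that agreed with both would have to leave the pair unrelated.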
… looking back, I’m opposed to your implicit definition of a “baseline”; the original population partial-ordering premises are the baseline here, not total utilitarianism.
I propose a new term for what we’re trying to do here, not for-profit, nor not-for-profit, but for-results.
The Carcinogen is already doing all it can to demolish any grand central church of atheism that might or might not exist. For example, this kind of antimeme spreads like wildfire. There is no need for us to do anything to encourage dispersal and mutation; it is already underway. And, I’m not sure about this, but doesn’t humanity already have swarm-intelligence setups for generating new concepts, new categories for people? I wouldn’t expect we’d need a machine to do that for us.
Second, there is absolutely no reason for us to settle for an idea that is not profitable.
Would Xodarap agree that the premises are as follows (assuming we have operator overloads for multisets rather than sets; a rough code rendering of both follows the list):
the better set is a superset: (A ⊂ B) ⇒ (A < B)
or everything in the better set that’s not in the worse set is better than everything in the worse set that’s not in the better set: (∀a∈(A∖B), b∈(B∖A): value(a) < value(b)) ⇒ (A < B)
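Here is that rough rendering of the two premises as predicates on multisets (collections.Counter standing in for the multiset; the helper names are mine):

    from collections import Counter

    def premise_superset(A, B):
        # (A ⊂ B) ⇒ (A < B): B strictly contains A as a multiset.
        return A != B and all(A[x] <= B[x] for x in A)

    def premise_dominance(A, B, value):
        # (∀ a ∈ A−B, b ∈ B−A: value(a) < value(b)) ⇒ (A < B).
        # Counter subtraction keeps only positive counts, giving the multiset
        # differences; like the quantified premise, this holds vacuously when
        # either difference is empty.
        only_A = list((A - B).elements())
        only_B = list((B - A).elements())
        return all(value(a) < value(b) for a in only_A for b in only_B)

    A = Counter([1, 1, 1, 2])   # the multiset {1, 1, 1, 2}
    B = Counter([3, 3, 4, 4])   # the multiset {3, 3, 4, 4}
    print(premise_dominance(A, B, value=lambda x: x))  # True -> A < B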
If the inequitable society has greater total utility, it must be at least as good as the equitable one.
No, the premises don’t necessitate that. “A is at least as good as B”, in our language, is ¬(A < B). But you’ve stated that the lack of an edge from A to B says nothing about whether A < B, yet now you’re talking as if, whenever the premises don’t conclude that A < B, they must conclude ¬(A < B), which is kinda mistaking absence of proof for proof of absence.
It might have been a slip of the tongue, or it might be an indication that you’re overestimating the significance of this alignment. These premises don’t prove that a higher-utility inequitable society is at least as good as a lower-utility equitable one. They merely don’t disagree with it.
I may be wrong here, but it looks as though, just as the premises support (A < B) ⇒ (utility(A) < utility(B)), they also support (A < B) ⇒ (normalizedU(A) < normalizedU(B)), such that normalizedU(World) = sum(log(utility(life)) for life in elements(World)): a perfectly reasonable sort of population utilitarianism where utility monsters are fairly well seen to. In this case equality would usually yield greater betterness than inequality, despite inequality being permitted by the premises.
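For instance (my numbers, purely illustrative): hold total utility fixed and normalizedU strictly prefers the equal world, which is the utility-monster point above.

    import math

    def normalizedU(world):
        # sum(log(utility(life)) for life in elements(World)), as above
        return sum(math.log(u) for u in world)

    equal   = (3, 3, 3, 3)   # total utility 12, spread evenly
    monster = (1, 1, 1, 9)   # total utility 12, concentrated in one life

    print(normalizedU(equal))    # ~4.39
    print(normalizedU(monster))  # ~2.20 -> at equal totals, equality wins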
Great answer, I know this is something I need to do more in life anyway. So I did a little bit of it just now. Sudden increase in levels of curiosity [so virtuous. Wow.]. I’m so curious I even want to know crap like why my housemate sometimes leaves a spoon stuck in the coffee grounds of the compost container. Obviously they used the spoon to move the grounds in there, but why did they leave it stuck there rather than moving it to the cutlery dip in the wash basin? Now that is an extraordinarily minor detail; take that as an indication of just how motivating it is to suspect that you don’t look closely enough at the details of your life to know whether you’re in a shoddy simulation.
That doesn’t answer the question? I’m pretty sure a honed attentiveness to the consistency of text wouldn’t raise my overall sanity waterline.
I tell everyone this all the time. Thank you, AGI, maybe now they’ll believe me.
I come from the future with a refutation from the past! http://lesswrong.com/lw/8gv/the_curse_of_identity/
Lesswrong’s threads have defeated Death.
Howdy, FourFire. At some point after conceiving of a particularly lofty, particularly involving plot [details available on request for LWers], I stopped trying to befriend people who wouldn’t feature anywhere in it. Whoever I’m with, there’s always an objective, though I’ll often have to pretend there isn’t and come at it sideways, which only makes it more fun.
For me there are two kinds of people, people I can do something with, and people I’ve got nothing to do with.
That’s contrary to my experience of epistemology. It’s just a word, define it however you want, but in both epistemic logic and pragmatics-stripped conventional usage, possibility is nothing more than a lack of disproof.